spark-redshift-community / spark-redshift

Performant Redshift data source for Apache Spark

Null Pointer Exception When Writing To Redshift #127

Open b2jant opened 1 year ago

b2jant commented 1 year ago

Not sure why writing to Redshift stopped working; I'm currently getting the following error:

Function Writing To Redshift:

dataframe.write \
    .format("io.github.spark_redshift_community.spark.redshift") \
    .options(**options) \
    .option("dbtable", table_name) \
    .option("tempdir", f"s3a://{bucket_name}/{redshift_tmp_prefix}") \
    .option("tempformat", "CSV") \
    .option("forward_spark_s3_credentials", "true") \
    .option("aws_access_key_id", AWS_ACCESS_KEY_ID) \
    .option("aws_secret_access_key", AWS_SECRET_ACCESS_KEY) \
    .mode("overwrite") \
    .save()

Error Message:

Py4JJavaError: An error occurred while calling o132.save.
: java.lang.NullPointerException
    at java.base/java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011)
    at java.base/java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006)
    at java.base/java.util.Properties.put(Properties.java:1340)
    at java.base/java.util.Properties.setProperty(Properties.java:228)
    at io.github.spark_redshift_community.spark.redshift.JDBCWrapper.$anonfun$getConnector$2(RedshiftJDBCWrapper.scala:230)
    at scala.Option.foreach(Option.scala:407)
    at io.github.spark_redshift_community.spark.redshift.JDBCWrapper.getConnector(RedshiftJDBCWrapper.scala:228)
    at io.github.spark_redshift_community.spark.redshift.RedshiftWriter.saveToRedshift(RedshiftWriter.scala:406)
    at io.github.spark_redshift_community.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:110)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
    at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
    at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:829)

jsleight commented 1 year ago

From io.github.spark_redshift_community.spark.redshift.JDBCWrapper.$anonfun$getConnector$2(RedshiftJDBCWrapper.scala:230) it seems like the password you're providing is null. The top frames back this up: java.util.Properties.setProperty (backed by ConcurrentHashMap.put since Java 9) throws a NullPointerException when handed a null value.
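
You can confirm this on the caller's side before invoking save(). A minimal sketch, assuming the Redshift user/password live in the options dict under these (hypothetical) keys; options, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY are the variables from your snippet:

# Minimal sketch: fail fast on null connection options before .save()
# hands them to the JDBC wrapper. The "user"/"password" keys are an
# assumption about how `options` is populated.
candidates = {
    "user": options.get("user"),
    "password": options.get("password"),
    "aws_access_key_id": AWS_ACCESS_KEY_ID,
    "aws_secret_access_key": AWS_SECRET_ACCESS_KEY,
}
missing = [name for name, value in candidates.items() if value is None]
if missing:
    raise ValueError(f"These options are None and would trigger the NPE: {missing}")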

Since you're using forward_spark_s3_credentials, the connector forwards whatever S3 credentials Spark itself resolves, so the null means your Spark cluster's AWSCredentialsProvider is busted.
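
One way to see what Spark can actually resolve is to inspect the Hadoop configuration that the forwarding draws from. A minimal sketch, assuming an S3A setup matching your s3a:// tempdir (the fs.s3a.* keys are the standard Hadoop S3A configuration names, and spark is your active SparkSession):

# Minimal sketch: dump the S3A credential settings Spark sees.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

# None here means the key is unset and the provider chain decides.
print(hadoop_conf.get("fs.s3a.access.key"))
print(hadoop_conf.get("fs.s3a.secret.key"))
print(hadoop_conf.get("fs.s3a.aws.credentials.provider"))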