snowflakedb / spark-snowflake

Snowflake Data Source for Apache Spark.
http://www.snowflake.net
Apache License 2.0

SnowflakeSQLLoggedException: No response or invalid response from GET request. Error: {} #537

Open mauriciojost opened 8 months ago

mauriciojost commented 8 months ago

We're using this library at version v2.12.0-spark_3.4. We're running a Spark 3.4 streaming query with foreachBatch(...) in append mode that writes into Snowflake.

The function called by foreachBatch(...) looks something like this:

    inputStreamingDf.write
      .format(SNOWFLAKE_SOURCE_SHORT_NAME)
      .option("column_mapping", "name")
      .options(options)
      .option("dbtable", outputTableName)
      .mode(SaveMode.Append)
      .save()
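
For context, here is a minimal sketch of how this function is wired into the streaming query; the streaming source, options map, table name, and checkpoint path are placeholders, and only SNOWFLAKE_SOURCE_SHORT_NAME and the write options above match our actual code:

    import net.snowflake.spark.snowflake.Utils.SNOWFLAKE_SOURCE_SHORT_NAME
    import org.apache.spark.sql.{DataFrame, SaveMode}

    val options: Map[String, String] = Map.empty // sfURL, sfUser, sfDatabase, ... (placeholders)
    val outputTableName = "OUTPUT_TABLE"         // placeholder
    val inputStreamingDf: DataFrame = ???        // streaming source elided

    inputStreamingDf.writeStream
      .foreachBatch { (batchDf: DataFrame, batchId: Long) =>
        // Write each micro-batch to Snowflake, as in the snippet above.
        batchDf.write
          .format(SNOWFLAKE_SOURCE_SHORT_NAME)
          .option("column_mapping", "name")
          .options(options)
          .option("dbtable", outputTableName)
          .mode(SaveMode.Append)
          .save()
      }
      .option("checkpointLocation", "/tmp/checkpoints/snowflake-sink") // placeholder
      .start()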

While running the query, we observe failures with this stack trace:

Caused by: net.snowflake.client.jdbc.SnowflakeSQLLoggedException: No response or invalid response from GET request. Error: {}
    at net.snowflake.client.core.SFSession.getQueryStatus(SFSession.java:190)
    at net.snowflake.client.jdbc.SFAsyncResultSet.getStatus(SFAsyncResultSet.java:101)
    at net.snowflake.client.jdbc.SFAsyncResultSet.getRealResults(SFAsyncResultSet.java:162)
    at net.snowflake.client.jdbc.SFAsyncResultSet.getMetaData(SFAsyncResultSet.java:277)
    at net.snowflake.spark.snowflake.io.StageWriter$.executeCopyIntoTable(StageWriter.scala:603)
    at net.snowflake.spark.snowflake.io.StageWriter$.writeToTableWithStagingTable(StageWriter.scala:471)
    at net.snowflake.spark.snowflake.io.StageWriter$.writeToTable(StageWriter.scala:299)
    at net.snowflake.spark.snowflake.io.StageWriter$.writeToStage(StageWriter.scala:238)
    at net.snowflake.spark.snowflake.io.package$.writeRDD(package.scala:106)
    at net.snowflake.spark.snowflake.SnowflakeWriter.save(SnowflakeWriter.scala:91)
    at net.snowflake.spark.snowflake.DefaultSource.createRelation(DefaultSource.scala:156)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:49)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.$anonfun$sideEffectResult$1(commands.scala:82)
    at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:79)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:91)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:272)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166)
  ...

Interestingly, despite the failures, everything is correctly pushed to Snowflake. The failures occur randomly; we could not correlate them with variables like the number of rows transferred, the duration of the write, etc.

We understand that StageWriter.executeCopyIntoTable(...) (https://github.com/snowflakedb/spark-snowflake/blob/v2.12.0-spark_3.4/src/main/scala/net/snowflake/spark/snowflake/io/StageWriter.scala#L489) can run in two modes, depending on the value of params.isExecuteQueryWithSyncMode (see the sketch after the questions below).

  1. Could you share some insight on why this could be happening?
  2. What is your advice for a short-term workaround? Would you recommend tweaking params.isExecuteQueryWithSyncMode to bypass that section of the code? If not, why?
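
For reference, here is our rough understanding of the two execution paths, sketched against the public Snowflake JDBC API rather than the connector's actual internals; the connection setup, COPY statement, and polling interval are placeholders:

    import java.sql.{Connection, ResultSet}
    import net.snowflake.client.core.QueryStatus
    import net.snowflake.client.jdbc.{SnowflakeResultSet, SnowflakeStatement}

    def runCopy(conn: Connection, copySql: String, syncMode: Boolean): Unit = {
      val stmt = conn.createStatement()
      if (syncMode) {
        // Sync path: blocks until COPY INTO completes; no status polling.
        stmt.executeQuery(copySql)
      } else {
        // Async path: submit the query, then poll its status over HTTP.
        // The "No response or invalid response from GET request" error in
        // the stack trace is raised from this polling (SFSession.getQueryStatus).
        val rs: ResultSet =
          stmt.unwrap(classOf[SnowflakeStatement]).executeAsyncQuery(copySql)
        var status = rs.unwrap(classOf[SnowflakeResultSet]).getStatus
        while (QueryStatus.isStillRunning(status)) {
          Thread.sleep(1000) // placeholder polling interval
          status = rs.unwrap(classOf[SnowflakeResultSet]).getStatus
        }
      }
    }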

Thanks for your help,

Mauricio & @danjok

XinEDprob commented 6 months ago

Hi @mauriciojost, did you solve this issue? I encountered the same error a few weeks ago and could not find a solution.

mauriciojost commented 5 months ago

For now, everything works perfectly with this option set on the streaming write:

    ...
    .option("params.isExecuteQueryWithSyncMode", "true")
    ...

It completely bypasses the async query-status polling section of the code, which is where the original issue occurs (see the stack trace above).
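
In context (same identifiers as the snippet in the original report), the write becomes:

    inputStreamingDf.write
      .format(SNOWFLAKE_SOURCE_SHORT_NAME)
      .option("column_mapping", "name")
      .options(options)
      .option("dbtable", outputTableName)
      .option("params.isExecuteQueryWithSyncMode", "true") // force the sync COPY INTO path
      .mode(SaveMode.Append)
      .save()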

XinEDprob commented 5 months ago

@mauriciojost thanks for the update. I also noticed that the root cause of this issue appears to be in the Snowflake JDBC driver, and it was confirmed that this is something that needs to be improved there.