microsoft / sql-spark-connector

Apache Spark Connector for SQL Server and Azure SQL

"java.io.InvalidClassException: failed to read class descriptor" Error when trying to write dataframe #260

[Open] ANL06488 opened this issue 3 months ago

ANL06488 commented 3 months ago

Writing a DataFrame to SQL Server with the following code:

(
    # CON_STR, schema, and table are defined elsewhere in the script
    df.write.format("com.microsoft.sqlserver.jdbc.spark")
    .option("schemaCheckEnabled", "false")
    .option("url", CON_STR)
    .option("tableLock", True)
    .option("batchSize", 5_000)
    .option("dbtable", f"[{schema}].[{table}]")
    .mode("append")
    .save()
)

raises the following error:

ProtoSerializer: Failed to deserialize remote exception      
java.io.InvalidClassException: failed to read class descriptor
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1977)  
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1850)     
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2160)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
        at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:633)
        at java.lang.Throwable.readObject(Throwable.java:915)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
        at org.apache.spark.sql.util.ProtoSerializer.$anonfun$deserializeObject$1(ProtoSerializer.scala:7543)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at org.apache.spark.sql.util.ProtoSerializer.deserializeObject(ProtoSerializer.scala:7528)
        at org.apache.spark.sql.util.ProtoSerializer.deserializeException(ProtoSerializer.scala:7559)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$executeRPC$1(SparkServiceRemoteFuncRunner.scala:192)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:24)
        at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPC(SparkServiceRemoteFuncRunner.scala:159)
        at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPCHandleCancels(SparkServiceRemoteFuncRunner.scala:291)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute0$2(SparkServiceRemoteFuncRunner.scala:119)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at com.databricks.service.SparkServiceRemoteFuncRunner.withRetry(SparkServiceRemoteFuncRunner.scala:139)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute0$1(SparkServiceRemoteFuncRunner.scala:114)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:24)
        at com.databricks.service.SparkServiceRemoteFuncRunner.execute0(SparkServiceRemoteFuncRunner.scala:114)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute$1(SparkServiceRemoteFuncRunner.scala:86)
        at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:251)
        at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:433)
        at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:412)
        at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:58)
        at com.databricks.service.SparkServiceRemoteFuncRunner.execute(SparkServiceRemoteFuncRunner.scala:78)
        at com.databricks.service.SparkServiceRemoteFuncRunner.execute$(SparkServiceRemoteFuncRunner.scala:67)
        at com.databricks.service.SparkServiceRPCClientStub.execute(SparkServiceRPCClientStub.scala:58)
        at com.databricks.service.SparkServiceRPCClientStub.executePlan(SparkServiceRPCClientStub.scala:198)
        at org.apache.spark.sql.DataFrameWriter.doRemoteSave(DataFrameWriter.scala:984)
        at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:262)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
        at py4j.Gateway.invoke(Gateway.java:306)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerException
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:263)
        at org.apache.spark.sql.util.SparkServiceObjectInputStream.readResolveClassDescriptor(SparkServiceObjectInputStream.scala:60)
        at org.apache.spark.sql.util.SparkServiceObjectInputStream.readClassDescriptor(SparkServiceObjectInputStream.scala:55)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1975)
        ... 54 more
Traceback (most recent call last):
  [REDACTED]
    df.write.format("com.microsoft.sqlserver.jdbc.spark")
  File "C:\Repos\data-trusted-sql\.venv\lib\site-packages\pyspark\instrumentation_utils.py", line 48, in wrapper
    res = func(*args, **kwargs)
  File "C:\Repos\data-trusted-sql\.venv\lib\site-packages\pyspark\sql\readwriter.py", line 1395, in save
    self._jwrite.save()
  File "C:\Repos\data-trusted-sql\.venv\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__
    return_value = get_return_value(
  File "C:\Repos\data-trusted-sql\.venv\lib\site-packages\pyspark\errors\exceptions.py", line 228, in deco
    return f(*a, **kw)
  File "C:\Repos\data-trusted-sql\.venv\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o481.save.
: com.databricks.service.SparkServiceRemoteException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1460.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1460.0 (TID 3183) (10.161.130.19 executor 20): com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:237)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:1555)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(SQLServerConnection.java:4239)
        at com.microsoft.sqlserver.jdbc.spark.BulkCopyUtils$.savePartition(BulkCopyUtils.scala:68)
        at com.microsoft.sqlserver.jdbc.spark.SingleInstanceWriteStrategies$.$anonfun$write$2(BestEffortSingleInstanceStrategy.scala:43)
        at com.microsoft.sqlserver.jdbc.spark.SingleInstanceWriteStrategies$.$anonfun$write$2$adapted(BestEffortSingleInstanceStrategy.scala:42)
        at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1059)
        at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1059)
        at org.apache.spark.SparkContext.$anonfun$runJob$2(SparkContext.scala:2798)
        at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:75)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:75)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:55)
        at org.apache.spark.scheduler.Task.doRunTask(Task.scala:179)
        at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:142)
        at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:126)
        at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:142)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.Task.run(Task.scala:97)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:904)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1740)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:907)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:761)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3424)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3346)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3335)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3335)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1444)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1444)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1444)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3635)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3573)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3561)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:51)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$runJob$1(DAGScheduler.scala:1193)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1181)
        at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2758)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2741)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2779)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2798)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2823)
        at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$1(RDD.scala:1059)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:125)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:445)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:1057)
        at com.microsoft.sqlserver.jdbc.spark.SingleInstanceWriteStrategies$.write(BestEffortSingleInstanceStrategy.scala:42)
        at com.microsoft.sqlserver.jdbc.spark.SingleInstanceConnector$.writeInParallel(SingleInstanceConnector.scala:35)
        at com.microsoft.sqlserver.jdbc.spark.Connector.write(Connector.scala:80)
        at com.microsoft.sqlserver.jdbc.spark.DefaultSource.createRelation(DefaultSource.scala:66)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:49)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.$anonfun$sideEffectResult$1(commands.scala:82)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:79)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:91)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:256)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:165)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:256)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$9(SQLExecution.scala:258)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:448)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:203)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1073)
        at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:131)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:398)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:255)
        at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:238)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:251)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:244)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:519)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:519)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:339)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:335)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:495)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:244)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:395)
        at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:244)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:198)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:189)
        at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:305)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:964)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:429)
        at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:396)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
        at com.databricks.service.DataFrameWriteCommand.run(DataFrameWriteCommand.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.$anonfun$sideEffectResult$1(commands.scala:82)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:79)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:91)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:256)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:165)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:256)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$9(SQLExecution.scala:258)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:448)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:203)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1073)
        at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:131)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:398)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:255)
        at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:238)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:251)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:244)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:519)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:519)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:339)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:335)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:495)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:244)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:395)
        at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:244)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:198)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:189)
        at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:305)
        at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:310)
        at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:307)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$writePlans$4(QueryExecution.scala:543)
        at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:751)
        at org.apache.spark.sql.execution.QueryExecution.writePlans(QueryExecution.scala:543)
        at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:560)
        at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:553)
        at java.lang.String.valueOf(String.java:2994)
        at java.lang.StringBuilder.append(StringBuilder.java:136)
        at com.databricks.service.SparkServiceImpl$.$anonfun$executePlan$2(SparkServiceImpl.scala:113)
        at org.apache.spark.internal.Logging.logInfo(Logging.scala:61)
        at org.apache.spark.internal.Logging.logInfo$(Logging.scala:60)
        at com.databricks.service.SparkServiceImpl$.logInfo(SparkServiceImpl.scala:58)
        at com.databricks.service.SparkServiceImpl$.$anonfun$executePlan$1(SparkServiceImpl.scala:113)
        at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:560)
        at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:657)
        at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:678)
        at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:414)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:158)
        at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:412)
        at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:409)
        at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:27)
        at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:457)
        at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:442)
        at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:27)
        at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:652)
        at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:569)
        at com.databricks.spark.util.PublicDBLogging.recordOperationWithResultTags(DatabricksSparkUsageLogger.scala:27)
        at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:560)
        at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:528)
        at com.databricks.spark.util.PublicDBLogging.recordOperation(DatabricksSparkUsageLogger.scala:27)
        at com.databricks.spark.util.PublicDBLogging.recordOperation0(DatabricksSparkUsageLogger.scala:68)
        at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:150)
        at com.databricks.spark.util.UsageLogger.recordOperation(UsageLogger.scala:72)
        at com.databricks.spark.util.UsageLogger.recordOperation$(UsageLogger.scala:59)
        at com.databricks.spark.util.DatabricksSparkUsageLogger.recordOperation(DatabricksSparkUsageLogger.scala:109)
        at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:433)
        at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:412)
        at com.databricks.service.SparkServiceImpl$.recordOperation(SparkServiceImpl.scala:92)
        at com.databricks.service.SparkServiceImpl$.executePlan(SparkServiceImpl.scala:111)
        at com.databricks.service.SparkServiceRPCHandler.execute0(SparkServiceRPCHandler.scala:695)
        at com.databricks.service.SparkServiceRPCHandler.$anonfun$executeRPC0$1(SparkServiceRPCHandler.scala:500)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at com.databricks.service.SparkServiceRPCHandler.executeRPC0(SparkServiceRPCHandler.scala:372)
        at com.databricks.service.SparkServiceRPCHandler$$anon$2.call(SparkServiceRPCHandler.scala:323)
        at com.databricks.service.SparkServiceRPCHandler$$anon$2.call(SparkServiceRPCHandler.scala:309)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at com.databricks.service.SparkServiceRPCHandler.$anonfun$executeRPC$1(SparkServiceRPCHandler.scala:359)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at com.databricks.service.SparkServiceRPCHandler.executeRPC(SparkServiceRPCHandler.scala:336)
        at com.databricks.service.SparkServiceRPCServlet.doPost(SparkServiceRPCServer.scala:167)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:523)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:190)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.Server.handle(Server.java:516)
        at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
        at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
        at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
        at java.lang.Thread.run(Thread.java:750)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
        at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:237)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:1555)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(SQLServerConnection.java:4239)
        at com.microsoft.sqlserver.jdbc.spark.BulkCopyUtils$.savePartition(BulkCopyUtils.scala:68)
        at com.microsoft.sqlserver.jdbc.spark.SingleInstanceWriteStrategies$.$anonfun$write$2(BestEffortSingleInstanceStrategy.scala:43)
        at com.microsoft.sqlserver.jdbc.spark.SingleInstanceWriteStrategies$.$anonfun$write$2$adapted(BestEffortSingleInstanceStrategy.scala:42)
        at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2(RDD.scala:1059)
        at org.apache.spark.rdd.RDD.$anonfun$foreachPartition$2$adapted(RDD.scala:1059)
        at org.apache.spark.SparkContext.$anonfun$runJob$2(SparkContext.scala:2798)
        at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$3(ResultTask.scala:75)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.ResultTask.$anonfun$runTask$1(ResultTask.scala:75)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:55)
        at org.apache.spark.scheduler.Task.doRunTask(Task.scala:179)
        at org.apache.spark.scheduler.Task.$anonfun$run$5(Task.scala:142)
        at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:126)
        at org.apache.spark.scheduler.Task.$anonfun$run$1(Task.scala:142)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.scheduler.Task.run(Task.scala:97)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$13(Executor.scala:904)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1740)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:907)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:761)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more

        at org.apache.spark.sql.util.ProtoSerializer.deserializeException(ProtoSerializer.scala:7565)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$executeRPC$1(SparkServiceRemoteFuncRunner.scala:192)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:24)
        at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPC(SparkServiceRemoteFuncRunner.scala:159)
        at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPCHandleCancels(SparkServiceRemoteFuncRunner.scala:291)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute0$2(SparkServiceRemoteFuncRunner.scala:119)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at com.databricks.service.SparkServiceRemoteFuncRunner.withRetry(SparkServiceRemoteFuncRunner.scala:139)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute0$1(SparkServiceRemoteFuncRunner.scala:114)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:24)
        at com.databricks.service.SparkServiceRemoteFuncRunner.execute0(SparkServiceRemoteFuncRunner.scala:114)
        at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute$1(SparkServiceRemoteFuncRunner.scala:86)
        at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:251)
        at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:433)
        at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:412)
        at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:58)
        at com.databricks.service.SparkServiceRemoteFuncRunner.execute(SparkServiceRemoteFuncRunner.scala:78)
        at com.databricks.service.SparkServiceRemoteFuncRunner.execute$(SparkServiceRemoteFuncRunner.scala:67)
        at com.databricks.service.SparkServiceRPCClientStub.execute(SparkServiceRPCClientStub.scala:58)
        at com.databricks.service.SparkServiceRPCClientStub.executePlan(SparkServiceRPCClientStub.scala:198)
        at org.apache.spark.sql.DataFrameWriter.doRemoteSave(DataFrameWriter.scala:984)
        at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:262)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
        at py4j.Gateway.invoke(Gateway.java:306)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
        at java.lang.Thread.run(Thread.java:748)

I can't really pin down what is causing the error. From the trace, the InvalidClassException looks like a secondary symptom: the Databricks Connect client fails to deserialize the remote exception because com.microsoft.sqlserver.jdbc.SQLServerException is not on its local classpath. The underlying executor failure is the SQLServerException "The connection is closed", thrown during the rollback in BulkCopyUtils.savePartition.
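
One thing that may be worth checking, as a sketch rather than a confirmed fix: making the connector and its mssql-jdbc dependency available to the Spark session via spark.jars.packages, so the remote SQLServerException can at least be deserialized cleanly instead of surfacing as InvalidClassException. The Maven coordinates below are assumptions and should be matched to the cluster's Spark/Scala and connector versions; under Databricks Connect the jars may additionally need to be on the local client's classpath, not just the remote driver's.

from pyspark.sql import SparkSession

# Sketch only: pull the connector (Scala 2.12 build assumed) and the
# mssql-jdbc driver it wraps from Maven before the session is created.
# Versions here are illustrative assumptions, not verified against
# this cluster.
spark = (
    SparkSession.builder
    .config(
        "spark.jars.packages",
        "com.microsoft.azure:spark-mssql-connector_2.12:1.2.0,"
        + "com.microsoft.sqlserver:mssql-jdbc:8.4.1.jre8",
    )
    .getOrCreate()
)

Even with the class available, the real failure would remain the closed connection during the bulk-copy rollback. Retrying with a smaller batchSize, or without tableLock, might help narrow down whether the bulk copy itself is being terminated, though that is speculation on my part rather than a known fix.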