MicrosoftDocs / azure-docs

Open source documentation of Microsoft Azure
https://docs.microsoft.com/azure
Creative Commons Attribution 4.0 International

Azure Synapse - Failed to classify the current request into a workload group. #101883

Closed: MohammedL365 closed this issue 5 months ago

MohammedL365 commented 1 year ago

Azure Synapse Analytics failed to execute the JDBC query produced by the connector. Underlying SQLException(s):

Technology/Service - https://learn.microsoft.com/en-us/azure/synapse-analytics/get-started-analyze-sql-pool

Very often I encounter Databricks notebook failures while trying to execute a query against a Synapse dedicated SQL pool. The error message is shown below.

The Azure support team advised that the Synapse product team is tracking this issue for a future release but doesn't have an ETA. I could not find this issue when searching, so I'm logging it here.
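Until there is a product-side fix, a job can at least recognize this specific failure and treat it as transient. The sketch below matches the error code and message text taken from the stack trace in this report; the helper name and constants are hypothetical, not part of any Synapse or Databricks API:

```python
# Minimal sketch: recognize the Synapse workload-classification failure from an
# exception's text so calling code can treat it as transient. The code 110813 and
# the message are copied from the stack trace below; names here are hypothetical.

CLASSIFICATION_ERROR_CODE = "110813"
CLASSIFICATION_ERROR_TEXT = (
    "Failed to classify the current request into a workload group"
)


def is_workload_classification_error(exc: BaseException) -> bool:
    """Return True if the exception text matches the Synapse 110813 failure."""
    text = str(exc)
    return (
        CLASSIFICATION_ERROR_TEXT in text
        or f"ErrorCode = {CLASSIFICATION_ERROR_CODE}" in text
    )
```

In a Databricks job, this would typically be applied to the `Py4JJavaError` (or the wrapping `Exception`) raised by the `save()` call, since the Synapse error text is embedded in its message.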


```
Py4JJavaError                             Traceback (most recent call last)
/databricks/python/lib/python3.8/site-packages/adh_spark_data_processor/deltalake_to_sql_incremental_load_processor.py in initiate_merge_silver_to_gold(self, sparksession, source_silver_table, drop_silver_system_columns, primary_key_list)
    469     temp_delete_tablenme = "Temp" + self._entity_to_process._target_object
--> 470     self._load_delete_data(self._acd_db_jdbc_util, df, temp_delete_table_nme, self._gold_dml_helper.targetTableNameParam, primary_key_list[0])
    471     print("start publishing delete list")

/databricks/python/lib/python3.8/site-packages/adh_spark_data_processor/deltalake_to_sql_incremental_load_processor.py in _load_delete_data(self, acd_jdbc_util, delete_data_df, temp_delete_table_name, target_table_name, primary_key)
    520
--> 521     acd_jdbc_util.execute_bulk_load_with_postactions(delete_data_df, temp_delete_table_name, "overwrite", post_actions_str)
    522

/databricks/python/lib/python3.8/site-packages/sparksharedcode/acddhsparkjdbcutil.py in execute_bulk_load_with_postactions(self, bulk_load_df, target_table, mode, post_actions)
     63
---> 64     bulk_load_df.repartition(20).write \
     65         .mode(mode) \

/databricks/spark/python/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options)
   1133         if path is None:
-> 1134             self._jwrite.save()
   1135         else:

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    116     try:
--> 117         return f(*a, **kw)
    118     except py4j.protocol.Py4JJavaError as e:

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    325         if answer[1] == REFERENCE_TYPE:
--> 326             raise Py4JJavaError(
    327                 "An error occurred while calling {0}{1}{2}.\n".

Py4JJavaError: An error occurred while calling o1740.save. : com.databricks.spark.sqldw.SqlDWSideException: Azure Synapse Analytics failed to execute the JDBC query produced by the connector. Underlying SQLException(s):

During handling of the above exception, another exception occurred:

Exception                                 Traceback (most recent call last)
in
    249     acd_sql_connection_config, acd_db_utility,
    250     acd_uuid_generator, acd_uuid_salt, gold_database_dml)
--> 251 acd_upsert_request.process_incremental_load(acd_sparksession)
    252
    253

/databricks/python/lib/python3.8/site-packages/adh_spark_data_processor/deltalake_to_sql_incremental_load_processor.py in process_incremental_load(self, sparksession)
    645
    646     start_time_last = time.time()
--> 647     self.initiate_merge_silver_to_gold(sparksession, silver_table_name, drop_silver_system_columns, key_column_list)
    648     print("publish-to-synapse-handler Execution Time")
    649     end_time_last = time.time()

/databricks/python/lib/python3.8/site-packages/adh_spark_data_processor/deltalake_to_sql_incremental_load_processor.py in initiate_merge_silver_to_gold(self, sparksession, source_silver_table, drop_silver_system_columns, primary_key_list)
    506         return True
    507     except Exception as e:
--> 508         raise Exception(f"Exception in processRequest: {e}")
    509
    510

Exception: Exception in processRequest: An error occurred while calling o1740.save. : com.databricks.spark.sqldw.SqlDWSideException: Azure Synapse Analytics failed to execute the JDBC query produced by the connector. Underlying SQLException(s):
 - com.microsoft.sqlserver.jdbc.SQLServerException: Failed to classify the current request into a workload group. [ErrorCode = 110813] [SQLState = S0001]
	at com.databricks.spark.sqldw.Utils$.wrapExceptions(Utils.scala:732)
	at com.databricks.spark.sqldw.DefaultSource.createRelation(DefaultSource.scala:89)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:96)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:213)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:257)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:165)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:253)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:209)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:167)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:166)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:1080)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:156)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:299)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:130)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:854)
	at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:249)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1080)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:469)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:439)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:312)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
	at py4j.Gateway.invoke(Gateway.java:295)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:251)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Exception thrown in awaitResult:
	at com.databricks.spark.sqldw.JDBCWrapper.executeInterruptibly(SqlDWJDBCWrapper.scala:137)
	at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeQueryInterruptibly$1(SqlDWJDBCWrapper.scala:105)
	at com.databricks.spark.sqldw.JDBCWrapper.withPreparedStatement(SqlDWJDBCWrapper.scala:357)
	at com.databricks.spark.sqldw.JDBCWrapper.executeQueryInterruptibly(SqlDWJDBCWrapper.scala:104)
	at com.databricks.spark.sqldw.DefaultSource.$anonfun$validateJdbcConnection$1(DefaultSource.scala:148)
	at com.databricks.spark.sqldw.DefaultSource.$anonfun$validateJdbcConnection$1$adapted(DefaultSource.scala:146)
	at com.databricks.spark.sqldw.JDBCWrapper.withConnection(SqlDWJDBCWrapper.scala:335)
	at com.databricks.spark.sqldw.DefaultSource.validateJdbcConnection(DefaultSource.scala:146)
	at com.databricks.spark.sqldw.DefaultSource.$anonfun$createRelation$3(DefaultSource.scala:92)
	at com.databricks.spark.sqldw.Utils$.wrapExceptions(Utils.scala:701)
	... 34 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Failed to classify the current request into a workload group.
	at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1632)
	at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:602)
	at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:524)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7418)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3272)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:247)
	at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:222)
	at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:446)
	at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeQueryInterruptibly$2(SqlDWJDBCWrapper.scala:105)
	at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$3(SqlDWJDBCWrapper.scala:129)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
	at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
	at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
```

[workloadclassificationerror.txt](https://github.com/MicrosoftDocs/azure-docs/files/10078808/workloadclassificationerror.txt)

---

#### Document Details

⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*

* ID: da9820f3-e186-4f52-7492-cb936823ea80
* Version Independent ID: 2746ca67-f71e-85c6-f5b3-0f828bcbbc68
* Content: [Tutorial: Get started analyze data with dedicated SQL pools - Azure Synapse Analytics](https://learn.microsoft.com/en-us/azure/synapse-analytics/get-started-analyze-sql-pool)
* Content Source: [articles/synapse-analytics/get-started-analyze-sql-pool.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/synapse-analytics/get-started-analyze-sql-pool.md)
* Service: **synapse-analytics**
* Sub-service: **sql**
* GitHub Login: @saveenr
* Microsoft Alias: **saveenr**
YashikaTyagii commented 1 year ago

@MohammedL365 Thanks for your feedback! We will investigate and update as appropriate.

RamanathanChinnappan-MSFT commented 1 year ago

@saveenr

Could you please review this, add your comments, and update as appropriate?

RamanathanChinnappan-MSFT commented 1 year ago

@MohammedL365

Thank you for bringing this to our attention. I've assigned this issue to the author who will investigate and update as appropriate.

MohammedL365 commented 1 year ago

Thanks @RamanathanChinnappan-MSFT. @saveenr, I'm looking forward to working with you on this one to understand the root cause and possible resolution.

MohammedL365 commented 1 year ago

Hi @saveenr, looking forward to your update on this issue. I continue to get this error intermittently, which causes our jobs to fail.
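Since the failure is intermittent, one common stop-gap is to retry the Synapse write with backoff when the failure looks transient. This is a minimal generic sketch, not the product team's fix; `retry_transient` and its parameters are hypothetical names, and in the job above `write_fn` would wrap the `bulk_load_df.repartition(20).write ... save()` call:

```python
import random
import time


def retry_transient(write_fn, is_transient, attempts=4, base_delay=2.0,
                    sleep=time.sleep):
    """Call write_fn(), retrying with jittered exponential backoff while
    is_transient(exc) reports the failure as retryable (for example, the
    'Failed to classify the current request into a workload group' error).
    Re-raises the exception once the last attempt fails or the error is
    not transient."""
    for attempt in range(attempts):
        try:
            return write_fn()
        except Exception as exc:
            if attempt == attempts - 1 or not is_transient(exc):
                raise
            # Back off 2s, 4s, 8s, ... plus up to 1s of jitter before retrying.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```

The `sleep` parameter is injectable only to make the helper testable; callers would normally leave it as `time.sleep`.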

bandersmsft commented 5 months ago

Thanks for your dedication to our documentation. Unfortunately, at this time we have been unable to review your issue in a timely manner and we sincerely apologize for the delayed response. We are closing this issue for now, but if you feel that it's still a concern, please respond and let us know. If you determine another possible update to our documentation, please don't hesitate to reach out again. #please-close