vcakdwivedi opened this issue 10 months ago (status: Open)
I think I'm having a similar issue, but only in a certain type of environment. On WSL 1 and on Compute Instances in Azure Machine Learning it works, but on Compute Clusters in Azure Machine Learning it fails.
@vcakdwivedi is it possible to share some logs with more logging enabled? Here is how I enable more logs.
```python
import logging
import contextlib
from http.client import HTTPConnection


def debug_requests_on():
    '''Switches on logging of the requests module.'''
    HTTPConnection.debuglevel = 1

    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True


def debug_requests_off():
    '''Switches off logging of the requests module, might be some side-effects'''
    HTTPConnection.debuglevel = 0

    root_logger = logging.getLogger()
    root_logger.setLevel(logging.WARNING)
    root_logger.handlers = []
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.WARNING)
    requests_log.propagate = False


@contextlib.contextmanager
def debug_requests():
    '''Use with 'with'!'''
    debug_requests_on()
    yield
    debug_requests_off()


debug_requests_on()
# Insert code to connect to databricks SQL Warehouse
```
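For context, a minimal sketch of how the helper above could wrap the connection attempt; the hostname, HTTP path, and token below are placeholders, not values taken from this issue:

```python
from databricks import sql

with debug_requests():  # temporarily enables the verbose HTTP logging defined above
    with sql.connect(
        server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
        http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
        access_token="dapi-placeholder-token",                         # placeholder
    ) as connection:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            print(cursor.fetchall())
```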
Be extra careful: the output shows every token from Azure and every PAT from Databricks, so you may want to redact them before posting the log.
Here is what I get without the logger:
```
2024-01-23 08:46:12,704 - thrift_backend - Error during request to server: {"method": "OpenSession", "session-id": null, "query-id": null, "http-code": null, "error-message": "", "original-exception": "Retry request would exceed Retry policy max retry duration of 900.0 seconds", "no-retry-reason": "non-retryable error", "bounded-retry-delay": null, "attempt": "1/30", "elapsed-seconds": "846.7760233879089/900.0"}
Traceback (most recent call last):
  File "/opt/miniconda/lib/python3.10/site-packages/databricks/sql/__init__.py", line 51, in connect
    return Connection(server_hostname, http_path, access_token, **kwargs)
  File "/opt/miniconda/lib/python3.10/site-packages/databricks/sql/client.py", line 235, in __init__
    self._open_session_resp = self.thrift_backend.open_session(
  File "/opt/miniconda/lib/python3.10/site-packages/databricks/sql/thrift_backend.py", line 576, in open_session
    response = self.make_request(self._client.OpenSession, open_session_req)
  File "/opt/miniconda/lib/python3.10/site-packages/databricks/sql/thrift_backend.py", line 505, in make_request
    self._handle_request_error(error_info, attempt, elapsed)
  File "/opt/miniconda/lib/python3.10/site-packages/databricks/sql/thrift_backend.py", line 335, in _handle_request_error
    raise network_request_error
databricks.sql.exc.RequestError: Error during request to server
```
And with the extra logging:
```
INFO:azure.identity._credentials.environment:No environment configuration found.
2024-01-26 09:14:22,751 - environment - No environment configuration found.
INFO:azure.identity._credentials.managed_identity:ManagedIdentityCredential will use Azure ML managed identity
2024-01-26 09:14:22,751 - managed_identity - ManagedIdentityCredential will use Azure ML managed identity
INFO:azure.identity._credentials.chained:DefaultAzureCredential acquired a token from ManagedIdentityCredential
2024-01-26 09:14:22,894 - chained - DefaultAzureCredential acquired a token from ManagedIdentityCredential
send: b'GET /MSI/token?api-version=2017-09-01&resource=xxxxxxxxxx&clientid=xxxxxxxxxxxx HTTP/1.1\r\nHost: localhost:46809\r\nUser-Agent: azsdk-python-identity/1.15.0 Python/3.10.13 (Linux-5.15.0-1053-azure-x86_64-with-glibc2.35)\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: /\r\nConnection: keep-alive\r\nsecret: xxxxxxxxxxxxxxxxxxxxx\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Date: Fri, 26 Jan 2024 09:14:22 GMT
header: Content-Length: 1679
send: b'POST /api/2.0/token/create HTTP/1.1\r\nHost: adb-my-host.azuredatabricks.net\r\nUser-Agent: python-requests/2.31.0\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: /\r\nConnection: keep-alive\r\nContent-type: application/json\r\nAuthorization: Bearer eyJ0eXAiOiJKV1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\r\nContent-Length: 78\r\n\r\n'
send: b'{"comment": "Token created from python code", "lifetime_seconds": 86400}'
reply: 'HTTP/1.1 200 OK\r\n'
header: server: databricks
header: x-databricks-reason-phrase: OK
header: x-xss-protection: 1; mode=block
header: date: Fri, 26 Jan 2024 09:14:23 GMT
header: x-frame-options: SAMEORIGIN
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: x-databricks-server-name: cons-webapp-44
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: content-type: application/json
header: x-databricks-org-id: 5428309926879282
header: content-encoding: gzip
header: vary: Accept-Encoding
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: transfer-encoding: chunked
```
So this confirms that I have created a token, and this Service Principal user is in a group with all rights ticked (SQL + cluster usage/modification/deletion).
```
send: b'POST /sql/1.0/warehouses/dcf9951cfebb7d84 HTTP/1.1\r\nHost: adb-my-host.azuredatabricks.net\r\nAccept-Encoding: identity\r\nContent-Type: application/x-thrift\r\nContent-Length: 118\r\nUser-Agent: PyDatabricksSqlConnector/3.0.1\r\nAuthorization: Bearer \r\n\r\n'
send: b'\x80\x01\x00\x01\x00\x00\x00\x0bOpenSession\x00\x00\x00\x00\x0c\x00\x01\r\x00\x04\x0b\x0b\x00\x00\x00\x01\x00\x00\x005spark.thriftserver.arrowBasedRowSet.timestampAsString\x00\x00\x00\x05false\n\x05\x02\x00\x00\x00\x00\x00\x00\xa5\x07\x02\x05\x05\x01\x00\x00'
reply: 'HTTP/1.1 403 Forbidden\r\n'
header: date: Fri, 26 Jan 2024 09:14:23 GMT
header: x-frame-options: SAMEORIGIN
header: x-xss-protection: 1; mode=block
header: x-content-type-options: nosniff
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: content-type: application/x-thrift
header: x-thriftserver-error-message: PERMISSION_DENIED: is not authorized to use this SQL Endpoint. Please contact your administrator.
header: x-databricks-org-id: xxxxxxxxxxxx
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: server: databricks
header: transfer-encoding: chunked
```
And this error loops for 900 seconds until I get the first error log on OpenSession.
I will try to roll back to check whether a previous version works.
@Uranium2 thanks for the details. What version of databricks-sql-connector do you have installed? Can you share a snippet of code that actually makes the connection so we can investigate?
I am using this version: databricks-sql-connector==3.0.1
I found out that I needed to tick some extra authorization on the Databricks SQL Warehouse permissions, which is strange, because my other managed identity on the Compute Instance does not have this explicit authorization and is not in the admin group.
Both of my Service Principals are in this aml-compute-group.
Maybe something changed in the handling of errors, because I remember that before, if my token was not valid, I got an explicit error message immediately. Here I had to wait 900 seconds to get a generic error, which is not very practical.
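One possible stop-gap for the 900-second wait, assuming the underscore-prefixed retry settings that the connector prints in its debug logs ("retry parameter: _retry_stop_after_attempts_duration given_or_default 900.0") are read from connect() kwargs, is to tighten them so the failure surfaces sooner. A hedged, untested sketch:

```python
from databricks import sql

# Assumption: these underscore-prefixed kwargs map to the "retry parameter" values
# printed in the debug logs; shrinking them makes a non-retryable failure surface
# sooner instead of after the default 900-second window.
connection = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
    access_token="dapi-placeholder-token",                         # placeholder
    _retry_stop_after_attempts_count=3,         # default 30
    _retry_stop_after_attempts_duration=120.0,  # default 900.0 seconds
)
```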
I'm also having issues with Databricks Clusters with PySpark, but that's not in the same repo. (Maybe it has a dependency on this, I don't know.)
100% agree that if you get a 403 on the response this shouldn't have to time out. I'm investigating to see if I can reproduce.
Can you enable debug logging and share what you see when this fails?
To do so, you can add the following to the top of whatever script you run that invokes databricks-sql-connector:
```python
import logging

logger = logging.getLogger("databricks.sql")
logger.setLevel(logging.DEBUG)
```
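One hedged addition to the snippet above: if the calling script never configures a logging handler of its own, the DEBUG records may not be printed anywhere, so it can help to attach one explicitly, for example:

```python
import logging

logging.basicConfig(level=logging.DEBUG)  # attach a root handler so records are actually emitted
logging.getLogger("databricks.sql").setLevel(logging.DEBUG)
```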
On adding logging I am getting this. Is there any workaround for this issue? I am using the below packages: dependencies:
```
1 items cleaning up...
Cleanup took 0.062397003173828125 seconds
Traceback (most recent call last):
  File "extract.py", line 76, in
```
There's certainly a workaround, but we need to isolate what's actually causing this exception. What you provided is a stack trace, not logs. Please see this comment for how to enable logging. I won't be able to help further without this information, given that what you've provided so far is a generic error and could have any number of causes.
@susodapop Hello, here is the log with the debug info from databricks.sql and also requests.packages.urllib3.
```
2024-01-29 08:46:45,138 - chained - DefaultAzureCredential acquired a token from ManagedIdentityCredential
DEBUG:databricks.sql.thrift_backend:retry parameter: _retry_delay_min given_or_default 1.0
DEBUG:databricks.sql.thrift_backend:retry parameter: _retry_delay_max given_or_default 60.0
DEBUG:databricks.sql.thrift_backend:retry parameter: _retry_stop_after_attempts_count given_or_default 30
DEBUG:databricks.sql.thrift_backend:retry parameter: _retry_stop_after_attempts_duration given_or_default 900.0
DEBUG:databricks.sql.thrift_backend:retry parameter: _retry_delay_default given_or_default 5.0
DEBUG:databricks.sql.thrift_backend:Sending request: OpenSession(<redacted>)
DEBUG:databricks.sql.auth.retry:This request should be retried: Other
DEBUG:databricks.sql.auth.retry:Failed requests are retried by default per configured DatabricksRetryPolicy
DEBUG:databricks.sql.auth.retry:This request should be retried: Other
DEBUG:databricks.sql.auth.retry:Failed requests are retried by default per configured DatabricksRetryPolicy
DEBUG:databricks.sql.auth.retry:This request should be retried: Other
DEBUG:databricks.sql.auth.retry:Failed requests are retried by default per configured DatabricksRetryPolicy
send: b'GET /MSI/token?api-version=2017-09-01&resource=<redacted>&clientid=xxxx-xxxxx-xxxxx-xxxxxxxxxx HTTP/1.1\r\nHost: localhost:46809\r\nUser-Agent: azsdk-python-identity/1.15.0 Python/3.10.13 (Linux-5.15.0-1053-azure-x86_64-with-glibc2.35)\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: /\r\nConnection: keep-alive\r\nsecret: LktX7fbBcqArI7HLzKna\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/json
header: Date: Mon, 29 Jan 2024 08:46:45 GMT
header: Content-Length: 1679
send: b'POST /api/2.0/token/create HTTP/1.1\r\nHost: adb-<redacted>.2.azuredatabricks.net\r\nUser-Agent: python-requests/2.31.0\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: /\r\nConnection: keep-alive\r\nContent-type: application/json\r\nAuthorization: Bearer <redacted>\r\nContent-Length: 78\r\n\r\n'
send: b'{"comment": "Token created from framework", "lifetime_seconds": 86400}'
reply: 'HTTP/1.1 200 OK\r\n'
header: server: databricks
header: x-databricks-reason-phrase: OK
header: date: Mon, 29 Jan 2024 08:46:45 GMT
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-databricks-server-name: cons-webapp-49
header: x-frame-options: SAMEORIGIN
header: x-content-type-options: nosniff
header: x-xss-protection: 1; mode=block
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: content-type: application/json
header: x-databricks-org-id: <redacted>
header: content-encoding: gzip
header: vary: Accept-Encoding
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: transfer-encoding: chunked
send: b'POST /sql/1.0/warehouses/<redacted> HTTP/1.1\r\nHost: adb-<redacted>.2.azuredatabricks.net\r\nAccept-Encoding: identity\r\nContent-Type: application/x-thrift\r\nContent-Length: 118\r\nUser-Agent: PyDatabricksSqlConnector/3.0.2\r\nAuthorization: Bearer <redacted>\r\n\r\n'
send: b'\x80\x01\x00\x01\x00\x00\x00\x0bOpenSession\x00\x00\x00\x00\x0c\x00\x01\r\x00\x04\x0b\x0b\x00\x00\x00\x01\x00\x00\x005spark.thriftserver.arrowBasedRowSet.timestampAsString\x00\x00\x00\x05false\n\x05\x02\x00\x00\x00\x00\x00\x00\xa5\x07\x02\x05\x05\x01\x00\x00'
reply: 'HTTP/1.1 403 Forbidden\r\n'
header: date: Mon, 29 Jan 2024 08:46:46 GMT
header: x-frame-options: SAMEORIGIN
header: x-xss-protection: 1; mode=block
header: x-content-type-options: nosniff
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: content-type: application/x-thrift
header: x-thriftserver-error-message: PERMISSION_DENIED: xxxx-xxxxx-xxxxx-xxxxxxxxxx is not authorized to use this SQL Endpoint. Please contact your administrator.
header: x-databricks-org-id: <redacted>
header: strict-transport-security: max-age=31536000; includeSubDomains; preload
header: x-content-type-options: nosniff
header: server: databricks
header: transfer-encoding: chunked
```
Here is the role of the group in Databricks:
And here is the authorization of the specific SQL Warehouse:
I find it weird that the group containing the SPs from the compute instance and the compute cluster does not behave the same way in both cases.
Here is the SP of the compute cluster:
Here is the SP of the compute Instance:
Both in the same group:
I have the same issue: raise network_request_error databricks.sql.exc.RequestError: Error during request to server
A RequestError can have any number of causes. Without logs there's no way for us to debug it further.
Client authentication failed - which parameter does that refer to? I use service principal authentication and receive an access token:

```python
from azure.identity import ClientSecretCredential  # import added; the original snippet omitted it

credential = ClientSecretCredential(
    tenant_id=tenant_id,
    client_id=client_id,
    client_secret=client_secret,
)
token = credential.get_token(scope)
```
```
retry parameter: _retry_delay_min given_or_default 1.0
retry parameter: _retry_delay_max given_or_default 60.0
retry parameter: _retry_stop_after_attempts_count given_or_default 30
retry parameter: _retry_stop_after_attempts_duration given_or_default 900.0
retry parameter: _retry_delay_default given_or_default 5.0
Sending request: TOpenSessionReq(client_protocol=None, username=None, password=None, configuration={'spark.thriftserver.arrowBasedRowSet.timestampAsString': 'false'}, getInfos=None, client_protocol_i64=42247, connectionProperties=None, initialNamespace=None, canUseMultipleCatalogs=True, sessionId=None)
Error during request to server: {"method": "OpenSession", "session-id": null, "query-id": null, "http-code": null, "error-message": "", "original-exception": "invalid_client: Client authentication failed", "no-retry-reason": "non-retryable error", "bounded-retry-delay": null, "attempt": "1/30", "elapsed-seconds": "0.3561375141143799/900.0"}
```
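For reference, a minimal sketch of how a service principal token like the one above is typically handed to the connector. The scope shown is the commonly documented Azure Databricks resource scope, and the tenant, client, hostname, and path values are placeholders; none of them come from this thread, so treat this as an assumption about the failing setup rather than a reproduction of it:

```python
from azure.identity import ClientSecretCredential
from databricks import sql

# Placeholders, not values from this issue
tenant_id = "<tenant-id>"
client_id = "<client-id>"
client_secret = "<client-secret>"

# Commonly documented Azure Databricks resource scope (assumed, not taken from the thread)
scope = "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default"

credential = ClientSecretCredential(
    tenant_id=tenant_id, client_id=client_id, client_secret=client_secret
)
token = credential.get_token(scope)

connection = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
    access_token=token.token,  # pass the token string, not the AccessToken object
)
```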
Same error here (v3.0.1):
retry parameter: _retry_delay_min given_or_default 1.0
retry parameter: _retry_delay_max given_or_default 60.0
retry parameter: _retry_stop_after_attempts_count given_or_default 30
retry parameter: _retry_stop_after_attempts_duration given_or_default 900.0
retry parameter: _retry_delay_default given_or_default 5.0
Sending request: OpenSession(<REDACTED>)
Received response: TOpenSessionResp(<REDACTED>)
Sending request: ExecuteStatement(<REDACTED>)
Error during request to server: {"method": "ExecuteStatement", "session-id": <REDACTED>, "query-id": null, "http-code": 200, "error-message": "", "original-exception": "ExecuteStatement command can only be retried for codes 429 and 503", "no-retry-reason": "non-retryable error", "bounded-retry-delay": null, "attempt": "1/30", "elapsed-seconds": "0.4073357582092285/900.0"}
How to reproduce it? Use a native query with a large number of positional arguments (?), for example 1000.
If we send these arguments as parameters, we get this error.
However, if we send the query with the parameters already inlined (as a single string), it works.
In addition, I tested the same query with the same number of parameters (241) against version 2.8.0 and it works.
The only difference is that in 2.8.0 we build the query as
(table.column1_id = %d AND table.column2_id = %d)
instead of
(table.column1_id = ? AND table.column2_id = ?)
with the same parameters.
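To make the comparison concrete, a hedged sketch of the two styles (table and column names are placeholders, and the pyformat example uses the named %(...)s form documented for 2.x rather than the %d shorthand above):

```python
# Given an open cursor from databricks.sql (see the connect() sketches earlier in the thread).

# databricks-sql-connector 2.x, legacy pyformat: values are interpolated into the SQL text client-side
cursor.execute(
    "SELECT * FROM my_table WHERE (my_table.column1_id = %(id1)s AND my_table.column2_id = %(id2)s)",
    {"id1": 1, "id2": 2},
)

# databricks-sql-connector 3.x, native paramstyle: values are sent separately and bound server-side
cursor.execute(
    "SELECT * FROM my_table WHERE (my_table.column1_id = ? AND my_table.column2_id = ?)",
    [1, 2],
)
```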
invalid_client: Client authentication failed
cc @jackyhu-db
There is a limit to the number of parameters allowed in native query mode; 1000 is too large. Would you share your use case? Do you have multiple SQL statements, or can a single SQL query involve 1000 parameters?
Oh, then we would appreciate it if the documentation explicitly mentioned what this limit is, and whether you recommend not using native parameters for composed queries.
Basically, what we are doing is building the query based on another query, from another DB.
So, in DB1 (not Databricks, but it could be there too) we are doing:
SELECT val1, val2 FROM table_with_ids;
And then, we create the following query for Databricks:
```python
or_clause = []
params = []
for val1, val2 in query1.values():
    or_clause.append("(table.column1_id = ? AND table.column2_id = ?)")
    params.append(val1)
    params.append(val2)
...
cursor.execute(f"SELECT * FROM table WHERE ({' OR '.join(or_clause)}) AND column3 = true", params)
```
Our code is more complex, but this is basically what we do. Do you recommend that we use the legacy pyformat paramstyle? The documentation clearly says it will be deprecated.
There is a way around it: fall back to the old inline mode (which basically substitutes parameters via string replacement). It's an option described in the release notes, but it gives you a false sense of security (it does not really counter SQL injection); still, it can be an understandable stop-gap. See the sketch below.
Yunbo
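A hedged sketch of that fallback, assuming the switch referred to in the release notes is the use_inline_params argument to connect() (see docs/parameters.md for the exact name and accepted values):

```python
from databricks import sql

# Assumption: use_inline_params=True restores the legacy client-side substitution.
# As noted above, inline substitution does not protect against SQL injection,
# so it is only a stop-gap for trusted parameter values.
connection = sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder
    access_token="dapi-placeholder-token",                         # placeholder
    use_inline_params=True,
)
```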
Thanks Yunbo. Yes, that's the workaround we have adopted temporarily, but ideally this is addressed in the next releases so we can prevent SQL injection.
Alvaro
```
    main(server_hostname = args.server_hostname, http_path = args.http_path, schema = args.schema, access_token = args.access_token)
  File "extract.py", line 24, in main
    dbx_extractor.execute_sql(sql_query, server_hostname, http_path, access_token)
  File "/mnt/azureml/cr/j/2c91464c9e0743f5860e8f48b407683d/exe/wd/dbextractor.py", line 23, in execute_sql
    with db_sql.connect(
  File "/azureml-envs/azureml_9806bfc7c716eaf0df12bea8512e961b/lib/python3.8/site-packages/databricks/sql/__init__.py", line 50, in connect
    return Connection(server_hostname, http_path, access_token, **kwargs)
  File "/azureml-envs/azureml_9806bfc7c716eaf0df12bea8512e961b/lib/python3.8/site-packages/databricks/sql/client.py", line 189, in __init__
    self._session_handle = self.thrift_backend.open_session(
  File "/azureml-envs/azureml_9806bfc7c716eaf0df12bea8512e961b/lib/python3.8/site-packages/databricks/sql/thrift_backend.py", line 464, in open_session
    response = self.make_request(self._client.OpenSession, open_session_req)
  File "/azureml-envs/azureml_9806bfc7c716eaf0df12bea8512e961b/lib/python3.8/site-packages/databricks/sql/thrift_backend.py", line 393, in make_request
    self._handle_request_error(error_info, attempt, elapsed)
  File "/azureml-envs/azureml_9806bfc7c716eaf0df12bea8512e961b/lib/python3.8/site-packages/databricks/sql/thrift_backend.py", line 261, in _handle_request_error
    raise network_request_error
databricks.sql.exc.RequestError: Error during request to server
```