databricks / databricks-sdk-py

Databricks SDK for Python (Beta)
https://databricks-sdk-py.readthedocs.io/
Apache License 2.0
370 stars 124 forks

[ISSUE] `Internal Server Error` when creating an alert with a parent folder #650

Closed sodle-splunk closed 5 months ago

sodle-splunk commented 6 months ago

Description

When attempting to create an alert (client.alerts.create), specifying the parent argument always raises an Internal Server Error.

Reproduction

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.sql import AlertOptions

client = WorkspaceClient(host="https://dbc-redacted.cloud.databricks.com", token="redacted")
client.alerts.create(
  name="test",
  query_id="redacted",
  options=AlertOptions(column="average_fare", op=">", value=10),
  parent="folders/HOME/testfolder",
)

testfolder exists under my home directory. This same script works when omitting the parent parameter, but creates the alert in the default location (home) instead.

Expected behavior

The alert should be created in testfolder under my home directory.

Is it a regression?

I believe this worked on the current SDK version when I tried it a few days ago, but it seems not to work now.

Debug Logs

DEBUG:urllib3.connectionpool:Resetting dropped connection: dbc-redacted.cloud.databricks.com
DEBUG:urllib3.connectionpool:https://dbc-redacted.cloud.databricks.com:443 "POST /api/2.0/preview/sql/alerts HTTP/1.1" 500 None
DEBUG:databricks.sdk:POST /api/2.0/preview/sql/alerts
> {
>   "name": "test",
>   "options": {
>     "column": "average_fare",
>     "op": ">",
>     "value": "**REDACTED**"
>   },
>   "parent": "folders/HOME/testfolder",
>   "query_id": "redacted"
> }
< 500 Internal Server Error
< {
<   "message": "Internal Server Error"
< }
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/sodle/.pyenv/versions/c_databricks/lib/python3.9/site-packages/databricks/sdk/service/sql.py", line 4002, in create
    res = self._api.do('POST', '/api/2.0/preview/sql/alerts', body=body, headers=headers)
  File "/Users/sodle/.pyenv/versions/c_databricks/lib/python3.9/site-packages/databricks/sdk/core.py", line 130, in do
    response = retryable(self._perform)(method,
  File "/Users/sodle/.pyenv/versions/c_databricks/lib/python3.9/site-packages/databricks/sdk/retries.py", line 54, in wrapper
    raise err
  File "/Users/sodle/.pyenv/versions/c_databricks/lib/python3.9/site-packages/databricks/sdk/retries.py", line 33, in wrapper
    return func(*args, **kwargs)
  File "/Users/sodle/.pyenv/versions/c_databricks/lib/python3.9/site-packages/databricks/sdk/core.py", line 238, in _perform
    raise self._make_nicer_error(response=response, **payload) from None
databricks.sdk.errors.platform.InternalError: Internal Server Error

Other Information

tanmay-db commented 6 months ago

Hi @sodle-splunk, thanks for raising the issue. It seems to be caused by a backend error and is most likely not something in the SDK. Can you please let me know if this persists? Issues of this type are generally transient. I have raised this with the internal team.

Also, can you try making this request through the Databricks CLI, for example? If it returns the same error response, we can confirm that the problem isn't local to the SDK.

sodle-splunk commented 6 months ago

Hi @tanmay-db. I'm still getting the error today through the SDK, and have been since at least Friday.

The following CLI command also fails with an Internal Server Error:

databricks alerts create --json '{
  "name": "test",
  "query_id": "redacted",
  "options": {"column": "average_fare", "op": ">", "value": 10},
  "parent": "folders/HOME/testfolder"
}'

Again, omitting the parent parameter allows it to succeed.

POSTing the same JSON payload to https://dbc-redacted.cloud.databricks.com/api/2.0/preview/sql/alerts via curl or Hoppscotch gives the same result.

So I definitely suspect something is wrong server-side.

sodle-splunk commented 6 months ago

Creating an alert in a folder via the UI is working for me, though the network tab in the browser shows that it uses a different API route (/sql/api/alerts) and a slightly different payload.

Correct me if I'm wrong, but the folder I'm looking at in Databricks seems to be addressable in three different ways:

This third ID is what the browser specifies as the parent when creating the alert. I've just confirmed that passing this value to the SDK also works, while the other two trigger the internal server error.

It seems to me like the API should be able to resolve a path into this ID, but is failing to do so.
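Based on that finding, a hypothetical sketch of the workaround: pass the numeric workspace object id (the third form, the one observed in the browser's network tab) in the `parent` field instead of a path. The node id and the `create` arguments below are placeholders, not values from this issue.

```python
def build_parent(node_id: int) -> str:
    # The form the browser sends, and the one confirmed to work:
    # "folders/<numeric object id>", not a workspace path.
    return f"folders/{node_id}"

# Usage against a real workspace (requires credentials, so not run here;
# 1234567890 is a placeholder node id):
# client.alerts.create(
#     name="test",
#     query_id="redacted",
#     options=AlertOptions(column="average_fare", op=">", value=10),
#     parent=build_parent(1234567890),
# )
```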

sodle-splunk commented 5 months ago

Hi @tanmay-db - Has there been any progress in investigating this? Thanks.

tanmay-db commented 5 months ago

Hi @sodle-splunk, thanks for the detailed response. Let me sync with the internal team (there is a holiday in the US today, so they may not reply until tomorrow).

> I'm still getting the error today through SDK, and have been since at least Friday.

As you mentioned, this doesn't seem to be transient and is most likely something on the server side.

Just to check, are you blocked on this issue at the moment or is there a workaround available?

sodle-splunk commented 5 months ago

Thanks for the response @tanmay-db

We are still blocked on this, as there is no workaround. We were also out yesterday due to the holiday, but we're back today.

tanmay-db commented 5 months ago

Hi @sodle-splunk, the parent field takes a node id, so specifying "folders/HOME/testfolder" will fail; you would need to specify the node id of testfolder, which you can get using this API: https://docs.databricks.com/api/workspace/workspace/getstatus.
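To sketch what that lookup involves: the get-status response includes an `object_id` field, from which the `parent` value can be built. This is a minimal offline illustration using a hand-written sample response; the path and id below are made up.

```python
import json

def parent_from_status(status_json: str) -> str:
    """Build the alerts ``parent`` value from a
    /api/2.0/workspace/get-status JSON response."""
    info = json.loads(status_json)
    return f"folders/{info['object_id']}"

# Sample response shape (ids and paths are made up):
sample = '{"object_type": "DIRECTORY", "path": "/Users/me/testfolder", "object_id": 42}'
print(parent_from_status(sample))  # -> folders/42
```

With the Python SDK, this would roughly correspond to `w.workspace.get_status("/Users/.../testfolder").object_id`, assuming that method is available in the installed version.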

Thanks to @xiangzhu-databricks for the clarification.

sodle-splunk commented 5 months ago

Ok. Thanks for clarifying. We will work around this.

Perhaps the requirement for a "node id" instead of a "folder path" should be more clearly documented?