elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

8.2.0 BC1: "Bad Request" error when searching saved objects #129424

Closed: richkuz closed this issue 2 years ago

richkuz commented 2 years ago

Kibana version: 8.2.0 BC1

Elasticsearch version: 8.2.0 BC1

Server OS version: (Cloud)

Browser version: Chrome Version 99.0.4844.84 (Official Build) (x86_64)

Browser OS version: MacOS BigSur 11.6

Original install method (e.g. download page, yum, from source, etc.): Elastic Cloud, GCP Los Angeles (us-west2) region

Describe the bug:

I see a "Bad Request" error whenever I search saved objects in the Kibana stack management UI.

Steps to reproduce:

  1. Launch an 8.2.0 BC1 deployment. I used Elastic Cloud GCP Los Angeles (us-west2) with the 8.2.0 (latest) version, corresponding to BC1 today.
  2. Navigate to Kibana, Stack Management, Saved Objects.
  3. Search Saved Objects for anything.

Observe an error in the UI about a Bad Request, Unable to Find Saved Objects. The search is not performed.

Expected behavior: Search should work.

Screenshots (if relevant): (screenshot attached in the original issue)

Errors in browser console (if relevant):

Failed to load resource: the server responded with a status of 400 ()


Requested URL:

https://rkuzsma-8-0-bc1-kibana-test.kb.us-west2.gcp.elastic-cloud.com:9243/api/kibana/management/saved_objects/_find?search=Any*&perPage=50&page=1&fields=id&type=config&type=url&type=index-pattern&type=query&type=tag&type=action&type=alert&type=graph-workspace&type=visualization&type=canvas-element&type=canvas-workpad&type=dashboard&type=search&type=lens&type=osquery-saved-query&type=osquery-pack&type=map&type=cases&type=uptime-dynamic-settings&type=synthetics-monitor&type=infrastructure-ui-source&type=metrics-explorer-view&type=inventory-view&type=apm-indices&sortField=type

Response:

{"statusCode":400,"error":"Bad Request","message":"all shards failed: search_phase_execution_exception: [query_shard_exception] Reason: failed to create query: Can only use phrase prefix queries on text fields - not on [synthetics-monitor.name] which is of type [keyword]"}
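The error message pins the failure on Elasticsearch's phrase-prefix handling: phrase-prefix clauses are only valid against `text` fields. As an illustrative sketch (the index and query below are simplified, not the exact request Kibana builds), the failing shape reduces to:

```
GET /.kibana/_search
{
  "query": {
    "match_phrase_prefix": {
      "synthetics-monitor.name": "Any"
    }
  }
}
```

Against a `keyword`-mapped field, Elasticsearch rejects this with the same `query_shard_exception` seen in the response above.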
richkuz commented 2 years ago

Reproduced on 8.2.0 BC3 on Cloud.

elasticmachine commented 2 years ago

Pinging @elastic/kibana-core (Team:Core)

richkuz commented 2 years ago

@rashmivkulkarni here is the log on Kibana server logs:

"error": {
      "type": "Error",
      "message": "Bad Request",
      "stack_trace": "Error: Bad Request\n    at Function.createBadRequestError (/usr/share/kibana/src/core/server/saved_objects/service/lib/errors.js:80:36)\n    at SpacesSavedObjectsClient.openPointInTimeForType (/usr/share/kibana/x-pack/plugins/spaces/server/saved_objects/spaces_saved_objects_client.js:257:48)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at PointInTimeFinder.open (/usr/share/kibana/src/core/server/saved_objects/service/lib/point_in_time_finder.js:148:11)\n    at PointInTimeFinder.find (/usr/share/kibana/src/core/server/saved_objects/service/lib/point_in_time_finder.js:103:5)\n    at getStats (/usr/share/kibana/src/plugins/vis_types/table/server/usage_collector/get_stats.js:57:20)\n    at /usr/share/kibana/src/plugins/usage_collection/server/collector/collector_set.js:153:26\n    at async Promise.all (index 35)\n    at CollectorSet.bulkFetch (/usr/share/kibana/src/plugins/usage_collection/server/collector/collector_set.js:139:25)\n    at CollectorSet.bulkFetchUsage (/usr/share/kibana/src/plugins/usage_collection/server/collector/collector_set.js:180:14)\n    at getUsage (/usr/share/kibana/src/plugins/usage_collection/server/routes/stats/stats.js:37:19)\n    at async Promise.all (index 0)\n    at /usr/share/kibana/src/plugins/usage_collection/server/routes/stats/stats.js:82:36\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:163:30)\n    at handler (/usr/share/kibana/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:371:32)\n    at Request._execute 
(/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:281:9)"
    },
rashmivkulkarni commented 2 years ago

Deployment id: 5207c514d46b4dea94040ec25e89cccc Region: us-west2 Build : 8.2.0 BC3

jportner commented 2 years ago

I confirmed I can reproduce this in 8.2 with Kibana on Cloud, but not when running it locally.

> @rashmivkulkarni here is the log on Kibana server logs:
>
> (stack trace quoted verbatim from the comment above)

I think that's a red herring unrelated to this problem; it's Metricbeat attempting to collect usage stats with an underprivileged user, see #120422.


What is the root cause of the error?

The problem, according to the 400 response's error message, is: all shards failed: search_phase_execution_exception: [query_shard_exception] Reason: failed to create query: Can only use phrase prefix queries on text fields - not on [synthetics-monitor.name] which is of type [keyword]

The problem stems from the synthetics-monitor saved object that the @elastic/uptime team introduced in 8.2. Its index mapping specifies that its name field type is keyword.

The _find API for the saved object management page attempts to query the superset of all known searchable fields (title, name, etc.) for all saved object types.

So, unfortunately, because some other saved object type previously designated defaultSearchField: 'name', the synthetics-monitor type triggers an ES error: its name field is keyword (not text).
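The superset-query behavior can be sketched as follows. This is a hypothetical TypeScript illustration, not Kibana's actual registry API; `SavedObjectTypeSketch`, `collectSearchFields`, and `incompatibleTypes` are invented names:

```typescript
// Hypothetical sketch of how the SOM _find search-field superset conflicts
// with a keyword-mapped field. Interfaces and names are illustrative only.

interface SavedObjectTypeSketch {
  name: string;
  defaultSearchField?: string;
  // Mapping type of the defaultSearchField in the saved objects index
  searchFieldMappingType?: 'text' | 'keyword';
}

const registeredTypes: SavedObjectTypeSketch[] = [
  { name: 'dashboard', defaultSearchField: 'title', searchFieldMappingType: 'text' },
  { name: 'tag', defaultSearchField: 'name', searchFieldMappingType: 'text' },
  // synthetics-monitor maps `name` as keyword, which phrase-prefix search rejects
  { name: 'synthetics-monitor', defaultSearchField: 'name', searchFieldMappingType: 'keyword' },
];

// The SOM page queries the superset of every type's default search field...
function collectSearchFields(types: SavedObjectTypeSketch[]): string[] {
  return [...new Set(types.map((t) => t.defaultSearchField).filter((f): f is string => !!f))];
}

// ...so a single type whose search field is not `text` fails the whole request.
function incompatibleTypes(types: SavedObjectTypeSketch[]): string[] {
  return types
    .filter((t) => t.defaultSearchField && t.searchFieldMappingType !== 'text')
    .map((t) => t.name);
}

console.log(collectSearchFields(registeredTypes)); // [ 'title', 'name' ]
console.log(incompatibleTypes(registeredTypes));   // [ 'synthetics-monitor' ]
```

In this sketch, one keyword-typed field in the combined field list poisons the entire request, which matches the symptom: every search on the SOM page fails, not just searches over synthetics-monitor.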

Why is this happening on Cloud, but not locally?

Here's the _find API request when running Kibana on Cloud: GET /api/kibana/management/saved_objects/_find?search=Any*&perPage=50&page=1&fields=id&type=config&type=url&type=index-pattern&type=query&type=tag&type=action&type=alert&type=graph-workspace&type=visualization&type=canvas-element&type=canvas-workpad&type=dashboard&type=search&type=lens&type=osquery-saved-query&type=osquery-pack&type=map&type=cases&type=uptime-dynamic-settings&type=synthetics-monitor&type=infrastructure-ui-source&type=metrics-explorer-view&type=inventory-view&type=apm-indices&sortField=type

Here's the _find API request when running Kibana locally: GET /api/kibana/management/saved_objects/_find?perPage=50&page=1&fields=id&type=config&type=url&type=index-pattern&type=action&type=query&type=alert&type=graph-workspace&type=tag&type=visualization&type=canvas-element&type=canvas-workpad&type=dashboard&type=search&type=lens&type=osquery-saved-query&type=osquery-pack&type=map&type=cases&type=uptime-dynamic-settings&type=infrastructure-ui-source&type=metrics-explorer-view&type=inventory-view&type=infrastructure-monitoring-log-view&type=apm-indices&sortField=type

Notice that Kibana only attempts to search for synthetics-monitor on Cloud, not locally. It appears that the Uptime plugin is enabled in both environments, but the Uptime plugin's service is not enabled by default when running Kibana locally, so that SO type is not registered with the system, and the SOM page doesn't attempt to search for it.

Why didn't CI catch this?

Cloud CI only runs a subset of our functional and integration tests. Unfortunately, this is a well-known problem that is not easy to solve, for reasons that are a bit beyond me.

What can we do to fix this in the short term?

@dominiqueclarke @shahzad31 I see a few courses of action in the short term:

  1. Change the mapping for this field to text.
  2. Change this saved object type so it does not show up in the SOM page at all (remove the management object attribute in the saved object type registration).

Since this is an encrypted saved object that includes secrets which aren't exportable anyway, I think option (2) probably makes more sense. I'm not sure what the motivation is for users to export this saved object type if the secrets would be missing.

What can we do to fix this in the long term?

I don't think the SOM page should be constructing its ES query like this. It should use a finer-grained search for each individual saved object type. I'll let @elastic/kibana-core open a separate issue to address this limitation.
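One possible shape for such a finer-grained query (purely illustrative, not an actual Kibana proposal) is a `bool`/`should` that scopes each search clause to its own type, so a keyword-mapped field only ever receives a query it supports:

```
{
  "query": {
    "bool": {
      "should": [
        {
          "bool": {
            "filter": [{ "term": { "type": "dashboard" } }],
            "must": [{ "match_phrase_prefix": { "dashboard.title": "Any" } }]
          }
        },
        {
          "bool": {
            "filter": [{ "term": { "type": "synthetics-monitor" } }],
            "must": [{ "prefix": { "synthetics-monitor.name": "Any" } }]
          }
        }
      ],
      "minimum_should_match": 1
    }
  }
}
```

With per-type clauses like these, a type whose search field is keyword can fall back to a `prefix` query while text fields keep using phrase-prefix matching.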

dominiqueclarke commented 2 years ago

Thank you for the write up @jportner

As far as options 1 and 2, I think we may want to consider both. name should be a text field to enable partial matching once we implement search in Monitor Management. For option 2, hiding the saved objects from the SOM page, I agree, but will defer to @paulb-elastic or @drewpost for product input.

shahzad31 commented 2 years ago

I think we potentially need to do both.

We should change the data types for name and a few other fields, since we will need to search on these fields going forward. name, type, urls, and tags should all be text.

We should also hide the type from saved object management, and maybe have a built-in export mechanism within the Uptime app, where we can control permissions and have secrets decrypted in case users really want to export monitors.
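A common Elasticsearch pattern for this (a sketch only; the actual synthetics-monitor mapping shape is an assumption) is `text` with a `keyword` sub-field, so these fields support partial matching while exact filtering, sorting, and aggregations keep working:

```
{
  "properties": {
    "name": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword" } }
    },
    "urls": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword" } }
    },
    "tags": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword" } }
    }
  }
}
```

Queries would then use `name` for full-text or prefix search and `name.keyword` wherever exact values are needed.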

shahzad31 commented 2 years ago

I have set up a draft here: https://github.com/elastic/kibana/pull/130433

dominiqueclarke commented 2 years ago

@richkuz

I have tested this in Cloud, first on an 8.2.0-SNAPSHOT release at Kibana commit 1cdf9c225497fbd8a19c22b3dd0e653b7448b60b.

Looks good to me. I don't see any errors when searching Kibana saved objects. However, I ran my test with a much smaller set of saved objects. Are there any tests you need to run on your side?

lukeelmers commented 2 years ago

> I don't think the SOM page should be constructing its ES query like this. It should use a finer-grained search for each individual saved object type. I'll let @elastic/kibana-core open a separate issue to address this limitation.

Thanks for the ping. Opened https://github.com/elastic/kibana/issues/130616