elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

Request failed with status code 503 when I click "Yes" for usage statistics. #22620

Open rluisr opened 6 years ago

rluisr commented 6 years ago

Kibana version: 6.4.0

Elasticsearch version: 6.4.0

Server OS version: CentOS 7.5.1804

Browser version: Chrome 68.0.3440.106

Browser OS version: OS X HighSierra

Original install method (e.g. download page, yum, from source, etc.): yum install via elastic repo

Describe the bug: Request failed with status code 503 (see screenshot)

Steps to reproduce:

  1. Install Kibana
  2. Open Kibana
  3. Click "Yes" in the "Help us improve the Elastic Stack by providing basic feature usage statistics? We will never share this data outside of Elastic. Read more" dialog.

Expected behavior:

Screenshots (if relevant):

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):

Any additional context:

kobelb commented 6 years ago

Hey @rluisr can you include your Kibana server logs when this error is being thrown?

rluisr commented 6 years ago

Sorry for the late reply.

The log is:

Sep  7 18:28:47 es kibana: {"type":"log","@timestamp":"2018-09-07T09:28:47Z","tags":["warning","stats-collection"],"pid":16972,"message":"Unable to fetch data from kibana collector"}
Sep  7 18:28:47 es kibana: {"type":"error","@timestamp":"2018-09-07T09:28:47Z","tags":["warning","stats-collection"],"pid":16972,"level":"error","error":{"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.4.0]: routing [null]]","name":"Error","stack":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.4.0]: routing [null]] :: {\"path\":\"/.kibana/doc/config%3A6.4.0\",\"query\":{},\"statusCode\":503,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.4.0]: routing [null]]\\\"}],\\\"type\\\":\\\"no_shard_available_action_exception\\\",\\\"reason\\\":\\\"No shard available for [get [.kibana][doc][config:6.4.0]: routing [null]]\\\"},\\\"status\\\":503}\"}\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:307:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:266:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:159:7)\n    at IncomingMessage.bound (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/dist/lodash.js:729:21)\n    at emitNone (events.js:111:20)\n    at IncomingMessage.emit (events.js:208:7)\n    at endReadableNT (_stream_readable.js:1064:12)\n    at _combinedTickCallback (internal/process/next_tick.js:138:11)\n    at process._tickDomainCallback (internal/process/next_tick.js:218:9)"},"message":"[no_shard_available_action_exception] No shard available for [get [.kibana][doc][config:6.4.0]: routing [null]]"}
Sep  7 18:28:47 es kibana: {"type":"log","@timestamp":"2018-09-07T09:28:47Z","tags":["warning","stats-collection"],"pid":16972,"message":"Unable to fetch data from kibana_settings collector"}
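
The no_shard_available_action_exception above points at the .kibana index itself rather than at Kibana: the stats collectors fail because the index's shard cannot be read. A minimal way to confirm this, assuming Elasticsearch is listening on localhost:9200 without authentication:

curl -XGET 'http://localhost:9200/_cat/indices/.kibana?v'

# If the health column shows "red", the primary shard of .kibana is
# unassigned, which matches the 503 that Kibana reports in the UI.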
rluisr commented 6 years ago

@kobelb

prozit commented 6 years ago

Hi, did you find any solution yet? I am suffering exactly the same symptoms since I migrated from 5.6 to 6.4.

Best regards.

kobelb commented 6 years ago

pinging @elastic/kibana-monitoring

samcro1967 commented 6 years ago

Same here running the latest version of sebp/elk Docker container.

pickypg commented 6 years ago

It looks like the root cause is that the .kibana index went red based on the no shards available error.
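
For a red index, Elasticsearch can usually say why the shard is unassigned. A quick diagnostic, assuming the same local defaults as above:

curl -XGET 'http://localhost:9200/_cluster/allocation/explain?pretty'

# Called without a request body, this explains the first unassigned shard
# it finds; typical causes are a node that left the cluster or disk
# watermarks being exceeded.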

kobelb commented 6 years ago

@pickypg should we consider suppressing these errors from the logs?

pickypg commented 6 years ago

@kobelb I don't think so; at least not until we have a better error passed back through to the UI. The 503 is meaningless with the current details, but the log message indicates that .kibana is broken until that is fixed (and really nothing can be set in Kibana as a result).

dndtec commented 5 years ago

I'm also facing the exact same issue.

idiotek commented 5 years ago

I am also facing the exact same issue on 6.4.2

ojizero commented 5 years ago

Not sure if it is the same case for you, but in our case we're using the SearchGuard open-source security plugin, and the Kibana server user was missing the indices:admin/template/put cluster-level permission 🤔. Adding the permission fixed the issue.
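
One way to test for this class of problem is to issue a template PUT as the Kibana server user and look for a 403. A sketch, assuming a hypothetical kibanaserver user and the ES 6.x template API:

# Returns a 403 security_exception if indices:admin/template/put is missing;
# a 200 "acknowledged" response means template permissions are fine.
curl -u kibanaserver:password -XPUT 'http://localhost:9200/_template/permission_check' \
  -H 'Content-Type: application/json' \
  -d '{"index_patterns": ["permission-check-*"], "settings": {"number_of_shards": 1}}'

# Clean up the test template afterwards.
curl -u kibanaserver:password -XDELETE 'http://localhost:9200/_template/permission_check'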

cachedout commented 5 years ago

Seems to be related to: https://github.com/elastic/kibana/issues/22842

sonam-tech commented 4 years ago

I have also faced the same issue.

Resolution steps taken:

  1. Check the disk space of the server on which the EFK stack is running.

  2. Check the health of all the indices by running:

     curl -XGET http://localhost:9200/_cat/indices?v

     Output:

     health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
     yellow open   fluentd-20200927                09QnX8lMRx-JfgAxl5lnDw   5   1    9112123            0        6gb            6gb
     green  open   .monitoring-kibana-6-2020.09.28 AF9Q2PPjTJabhhMu5baGBA   1   0        189            0    168.2kb        168.2kb
     green  open   .monitoring-es-6-2020.09.28     umZ-JPLdRSKqQXk41EGNgg   1   0       1594          184      1.1mb          1.1mb
     yellow open   fluentd-20200928                lc3bCTV3TQakewnrbhKj-w   5   1    3588643            0      2.8gb          2.8gb
     red    open   .kibana                         3uBlKV5JQHuolQqrr9JAsw   1   0

  3. Delete the .kibana index whose health is red (see the command sketch below).

  4. Restart all the EFK services/pods/containers.

  5. Create a new index for today's date.

After that the issue was resolved and we started receiving the latest logs in the Kibana dashboard.
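
For reference, the delete-and-restart part of those steps comes down to something like this (a sketch, assuming a single-node setup on localhost:9200; deleting .kibana also deletes saved dashboards, visualizations, and index patterns, so only do this if you can recreate them):

# Destructive: removes the red .kibana index and all saved objects in it.
curl -XDELETE 'http://localhost:9200/.kibana'

# Restart Kibana; it recreates a fresh .kibana index on startup.
sudo systemctl restart kibana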