opensearch-project / security-dashboards-plugin

🔐 Manage your internal users, roles, access control, and audit logs from OpenSearch Dashboards
https://opensearch.org/docs/latest/security-plugin/index/
Apache License 2.0

Tenant indices migration failed #102

Closed: LucaBlackDragon closed this issue 2 years ago

LucaBlackDragon commented 5 years ago

Setup:

Issue:

Kibana remains in YELLOW status.

Contents of the Plugin Status table:

| ID | Status |
| --- | --- |
| plugin:opendistro_security@7.2.0 | Tenant indices migration failed |

The following error message is logged by Kibana (e.g. when running `kibana --verbose` in a console window):

    [error][migration] Authorization Exception :: {"path":"/_opendistro/_security/tenantinfo","query":{},"statusCode":403,"response":""}
    at respond (C:\elk\kibana-7.2.0-windows-x86_64\node_modules\elasticsearch\src\lib\transport.js:315:15)
    at checkRespForFailure (C:\elk\kibana-7.2.0-windows-x86_64\node_modules\elasticsearch\src\lib\transport.js:274:7)
    at HttpConnector.<anonymous> (C:\elk\kibana-7.2.0-windows-x86_64\node_modules\elasticsearch\src\lib\connectors\http.js:166:7)
    at IncomingMessage.wrapper (C:\elk\kibana-7.2.0-windows-x86_64\node_modules\elasticsearch\node_modules\lodash\lodash.js:4935:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Similar issues have already been reported (#1, #17), but the proposed solutions have no effect in my case.

abhijithb92 commented 4 years ago

I am facing the same issue with Open Distro version 1.3.0. When I set it up using docker-compose, I see a server status page with a yellow status saying "Tenant indices migration failed". When I run it from the tar file, it works properly.

When I disable multi-tenancy, I am also able to run Kibana in the Docker setup; the relevant setting is sketched below.
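For reference, that workaround is a one-line change, a minimal sketch assuming the kibana.yml setting name quoted later in this thread:

    # kibana.yml: disable the security plugin's multi-tenancy (workaround only;
    # tenant features are unavailable while this is off)
    opendistro_security.multitenancy.enabled: false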

takugawa commented 4 years ago

Same problem on 1.4.0. I need multi-tenancy; on 1.3.0 it works.

shubham25namdeo commented 4 years ago

I am facing this issue on Amazon's Elasticsearch Service as well. I have a multi-tenant architecture.

Update: a quick workaround I found is to change the disk space in the Elasticsearch settings, which causes the migrations to run again; this resolved the issue for me.

Update 2 (details regarding the workaround): I've found a workaround for Amazon's Elasticsearch Service that worked for me for the time being. It may help others who face the same problem with this service until a fix is finally added by AWS. The steps are below:

  1. Once you encounter this error, wait for your nodes (data nodes, if dedicated nodes are present) to become active.
  2. Click Edit domain on the Elasticsearch Service page in the AWS console.
  3. Change the EBS storage size per node by 1 GB; this will cause the migrations to run again and possibly fix the indices migration error (a CLI equivalent is sketched after this list).
  4. Click Submit and wait for the domain to become active; once it is, try the Kibana URL.
  5. If that does not work, try changing the EBS storage size per node by 1 GB again.
  6. As a last resort, try changing the instance type, and once the domain is active again, switch back to the previous type.
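For those who prefer the AWS CLI, step 3 can be expressed roughly as follows (a hypothetical sketch: the domain name, volume type, and sizes are placeholders; check the domain's current EBS options first):

    # Inspect the current per-node EBS volume settings
    aws es describe-elasticsearch-domain --domain-name my-domain \
        --query 'DomainStatus.EBSOptions'

    # Bump the volume size by 1 GB to trigger a new deployment and re-run migrations
    aws es update-elasticsearch-domain-config --domain-name my-domain \
        --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=11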

I arrived at this working solution purely by trial and error. I am open to discussion on it.

educoutinho commented 4 years ago

Same problem in Amazon's Elasticsearch Service. Changing the disk space didn't work for me; the domain has been stuck in "Processing" status for more than 24 hours.

"plugin:opendistro_security@7.1.1" Status = Tenant indices migration failed

Output of `GET /_cat/shards`:

    .opendistro_security 0 p STARTED    6 37.7kb x.x.x.x 89f8cd6f5c5290bd199440087b8edee4
    .opendistro_security 0 r UNASSIGNED

jsirianni commented 4 years ago

I am running into this on Open Distro 1.8. Kibana is containerized. My understanding is that saved object migration should only happen when upgrading versions; however, it seems to happen randomly without a version upgrade. Sometimes it fails and leaves us in this "yellow" state.

OrangeTimes commented 4 years ago

Same issue here.

happenedIn commented 4 years ago

I urgently require multi-tenancy. Has this been looked into yet?

Seeing this on Amazon Elasticsearch 7.4: plugin:opendistro_security@7.4.2 | Tenant indices migration failed

I also see it with the amazon/opendistro-for-elasticsearch-kibana:1.4.0 Docker image running locally.

Tarasovych commented 4 years ago

Same issue here, running on Docker.

As a workaround, I added OPENDISTRO_SECURITY_MULTITENANCY_ENABLED=false to the environment directive (see the compose fragment below).

P.S. This is with amazon/opendistro-for-elasticsearch-kibana:1.8.0.
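In compose terms that workaround looks roughly like this (a sketch; the service name is illustrative, the image tag is the one from the P.S., and the image is assumed to map this environment variable onto the kibana.yml multi-tenancy setting):

    # docker-compose.yml fragment (under "services:"): disable multi-tenancy
    # via the environment variable Tarasovych mentions
    kibana:
      image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
      environment:
        - OPENDISTRO_SECURITY_MULTITENANCY_ENABLED=false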

drock commented 4 years ago

Same issue here after modifying an AWS cluster. As a workaround, is it possible to delete the .opendistro_security index? If we did that, would it simply be re-created? Would we then have to set up our roles and mappings again? In my case I don't have many, so I could easily do that. This is a production cluster, so any steps to fix it without access to the Kibana or ES nodes themselves would be greatly appreciated.

zengyan-amazon commented 4 years ago

@drock deleting the .opendistro_security index won't help with this issue. The .opendistro_security index stores all security configurations, such as permissions, roles, and internal users, while the tenant indices migration issue happens when Kibana fails to migrate the tenant indices (named like .kibana_<hash_code>_<tenant_name>).

For Open Distro for Elasticsearch, restarting the Kibana process may fix this problem. If you have this issue with an AWS Elasticsearch Service domain, you can open a support case and our engineers can help fix it.
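To see the tenant indices in question and their health, a standard `_cat` call can be used, in the same style as the shard listing earlier in this thread (a sketch; the pattern simply matches the .kibana_<hash_code>_<tenant_name> naming described above):

    GET /_cat/indices/.kibana*?v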

rkbennett commented 4 years ago

Same issue here as well, though admittedly I am using the 1.10.x branch for 7.9.1, which I assume isn't a general release yet.

datavistics commented 3 years ago

Same issue here. Open Distro version: 1.9.0. It happens when I start docker-compose from scratch.

linbingdouzhe commented 3 years ago

Same here when I enabled:

    elasticsearch.requestHeadersWhitelist: ["securitytenant", "Authorization"]
    opendistro_security.multitenancy.enabled: true
    opendistro_security.multitenancy.tenants.enable_global: true
    opendistro_security.multitenancy.tenants.enable_private: true
    opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
    opendistro_security.multitenancy.enable_filter: true

If I disable these settings it recovers, but I really need this feature.

Open Distro version: 1.3.0

dtakis commented 3 years ago

The same issue was noticed with Open Distro Kibana version 1.9.0 when trying to connect to AWS Open Distro for Elasticsearch (7.8).

davidlago commented 2 years ago

Apologies for the late reply. Given the amount of time that has passed since these issues were filed, I'm not sure if they are still a problem in the current versions of OpenSearch, or if you were able to find a solution for this. Also, current docker images starting from scratch do not have this behavior (cc @datavistics). In any case, if this is still an issue with current versions, please feel free to re-open and we'll take a look!