Closed: LucaBlackDragon closed this issue 2 years ago
I am facing the same issue with OpenDistro version 1.3.0. When I set it up using docker-compose, I see a server status page with status yellow saying tenant indices migration failed. When I run from the tar file, it works properly.
When I disable multitenancy, I am able to run Kibana in the Docker setup as well.
Same problem on 1.4.0. I need multitenancy; on 1.3.0 it works.
I am facing this issue on Amazon's Elasticsearch service as well. I have multitenant architecture.
Update: a quick workaround I found is to change the disk space again in the Elasticsearch settings, which causes the migrations to run again and resolved this issue.
Update 2 (details regarding the workaround): I've got a workaround for Amazon's Elasticsearch which worked for me for the time being, and it may help others who are facing the same problem with this service until a fix is finally added to AWS. I've added the steps below:
I arrived at this working solution by trial and error only. I am open to discussion on it.
Same problem on Amazon's Elasticsearch service. Changing the disk space didn't work for me; the domain status has been stuck at "Processing" for more than 24 hours.
"plugin:opendistro_security@7.1.1" Status = Tenant indices migration failed
[GET] /_cat/shards
.opendistro_security 0 p STARTED    6 37.7kb x.x.x.x 89f8cd6f5c5290bd199440087b8edee4
.opendistro_security 0 r UNASSIGNED
I am running into this on Open Distro 1.8. Kibana is containerized. It is my understanding that saved object migration should only happen when upgrading versions; however, it seems to happen randomly without any version upgrade. Sometimes it fails and leaves us in this "yellow" state.
same issue here
I urgently require multi-tenancy. Has this been looked into yet?
Seeing this on Amazon Elasticsearch 7.4 (plugin:opendistro_security@7.4.2 | Tenant indices migration failed) and with the amazon/opendistro-for-elasticsearch-kibana:1.4.0 docker image running locally.
Same issue here, running on Docker.
Added OPENDISTRO_SECURITY_MULTITENANCY_ENABLED=false to the environment directive.
P.S. amazon/opendistro-for-elasticsearch-kibana:1.8.0
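For anyone else wiring this up, the environment directive mentioned above might look like the following docker-compose fragment (a sketch only; the service name "kibana" is an assumption, and the image tag matches the P.S. above):

```yaml
# Sketch of a docker-compose service disabling Kibana multitenancy.
# Service name is assumed; adjust to your compose file.
services:
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.8.0
    environment:
      - OPENDISTRO_SECURITY_MULTITENANCY_ENABLED=false
```

Note that this only sidesteps the yellow status by turning multitenancy off; it is not a fix for the migration failure itself.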
Same issue here after modifying an AWS cluster. As a workaround, is it possible to delete the .opendistro_security index? If we did that, would it simply be re-created? Would we then have to re-setup our roles and mappings? In my case I don't have many so I can easily do that. This is a production cluster so any steps to fix it without access to the kibana or es nodes themselves would be greatly appreciated.
@drock deleting the .opendistro_security index won't help for this issue. The .opendistro_security index stores all security configuration such as permissions, roles, and internal users, while the tenant indices migration issue happens when Kibana fails to migrate the tenant indices like .kibana_<hash_code>_<tenant_name>.
For Open Distro for Elasticsearch, restarting the Kibana process may fix this problem. If you have this issue with an AWS Elasticsearch Service domain, you can cut a support case and our engineers can help fix it.
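As an aside, when hunting for a tenant's index by name: to the best of my knowledge the <hash_code> part of .kibana_<hash_code>_<tenant_name> is the Java String.hashCode of the tenant name (an assumption worth verifying against the security plugin source for your version). A small Python sketch to reproduce it:

```python
def java_string_hash(s: str) -> int:
    """Reproduce Java's String.hashCode() as a signed 32-bit int.

    Matches Java for characters in the Basic Multilingual Plane.
    """
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Convert the unsigned 32-bit result to Java's signed range.
    return h - 0x100000000 if h >= 0x80000000 else h

# Hypothetical tenant name, just to show the index-name pattern:
tenant = "admin_tenant"
print(f".kibana_{java_string_hash(tenant)}_{tenant.lower()}")
```

This can make it easier to match an index listed by `_cat/indices` back to the tenant it belongs to.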
Same issue here as well, though admittedly, I am using the 1.10.x branch for 7.9.1, which I assume isn't general release yet.
Same issue here. Opendistro version: 1.9.0. Happens when I start docker-compose from scratch.
Same here when I enabled:
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.enable_global: true
opendistro_security.multitenancy.tenants.enable_private: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.multitenancy.enable_filter: true
If I disable these config options it recovers, but I really need this feature.
Opendistro version: 1.3.0
The same issue noticed with open distro kibana version 1.9.0 when trying to connect to AWS Open Distro for Elasticsearch (7.8).
Apologies for the late reply. Given the amount of time that has passed since these issues were filed, I'm not sure if they are still a problem in the current versions of OpenSearch, or if you were able to find a solution for this. Also, current docker images starting from scratch do not have this behavior (cc @datavistics). In any case, if this is still an issue with current versions, please feel free to re-open and we'll take a look!
Setup: the opendistro_security.multitenancy.enabled: true option in kibana.yml (and the # Kibana multitenancy section uncommented in plugins\opendistro_security\securityconfig\config.yml, though it didn't seem to have any effect).
Issue:
Kibana remains in YELLOW status.
Contents of the Plugin Status table:
The following error message is logged by Kibana (e.g. when running kibana --verbose in a console window):
Similar issues were already signaled (#1, #17), but the proposed solutions don't have any effect in my case.
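For completeness, the # Kibana multitenancy section of plugins\opendistro_security\securityconfig\config.yml mentioned in the Setup above roughly corresponds to the following sketch (key names recalled from the Open Distro sample config and worth checking against your version; values shown are illustrative defaults):

```yaml
# Sketch of the security plugin's Kibana multitenancy settings in config.yml.
config:
  dynamic:
    kibana:
      multitenancy_enabled: true
      server_username: kibanaserver
      index: ".kibana"
```

Both this section and the kibana.yml option need to agree for multitenancy to work, which is why toggling only one of them can appear to have no effect.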