openshift / openshift-ansible

Install and config an OpenShift 3.x cluster
https://try.openshift.com
Apache License 2.0

Kibana UI is not accessible #4305

Closed mazzy89 closed 7 years ago

mazzy89 commented 7 years ago

Description

After installing OpenShift logging, it is not possible to access the Kibana UI. Here are the vars defined in the inventory for OpenShift logging:

openshift_logging_install_logging=True
openshift_logging_es_pvc_dynamic=True
openshift_logging_es_pvc_size=50Gi

These vars deploy the EFK stack, but the Kibana UI is not accessible: according to the router, the URL is kibana.router.default.svc.cluster.local, which of course is not routable from the outside. I then deployed one more time after adding the following property to the inventory:

openshift_logging_kibana_hostname=kibana.apps.mydomain.net

The Kibana UI is still inaccessible. When I hit https://kibana.apps.mydomain.net, the browser redirects me to the internal AWS DNS name for the master (something like https://ip-10-20-5-198.us-east-2.compute.internal/oauth/authorize?response_type=code&redirect_uri=https%3A%2F%2Fkibana.apps.mydomain.net%2Fauth%2Fopenshift%2Fcallback&client_id=kibana-proxy), and of course a DNS error occurs.

NAME                                 REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/logging-curator                   1          1         1         config
dc/logging-es-data-master-6pksf167   1          1         1         config
dc/logging-kibana                    1          1         1         config

NAME                                   DESIRED   CURRENT   READY     AGE
rc/logging-curator-1                   1         1         1         4m
rc/logging-es-data-master-6pksf167-1   1         1         1         8m
rc/logging-kibana-1                    1         1         1         6m

NAME                    HOST/PORT                     PATH      SERVICES         PORT      TERMINATION          WILDCARD
routes/logging-kibana   kibana.apps.mydomain.net             logging-kibana   <all>     reencrypt/Redirect   None

NAME                     CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
svc/logging-es           172.30.47.15     <none>        9200/TCP   4h
svc/logging-es-cluster   172.30.187.136   <none>        9300/TCP   4h
svc/logging-kibana       172.30.195.95    <none>        443/TCP    4h

NAME                                         READY     STATUS    RESTARTS   AGE
po/logging-curator-1-dw7g2                   1/1       Running   0          4m
po/logging-es-data-master-6pksf167-1-890cr   1/1       Running   0          8m
po/logging-fluentd-09dbk                     1/1       Running   0          3m
po/logging-fluentd-5709s                     1/1       Running   0          3m
po/logging-fluentd-9cmw5                     1/1       Running   0          3m
po/logging-fluentd-9qz2w                     1/1       Running   0          3m
po/logging-fluentd-bbd1l                     1/1       Running   0          3m
po/logging-fluentd-clwjd                     1/1       Running   0          3m
po/logging-fluentd-mffw2                     1/1       Running   0          3m
po/logging-fluentd-st559                     1/1       Running   0          3m
po/logging-kibana-1-bpn2z                    2/2       Running   0          6m
[ec2-user@ip-10-20-5-198 ~]$ oc -n logging describe svc logging-kibana
Name:                   logging-kibana
Namespace:              logging
Labels:                 <none>
Selector:               component=kibana,provider=openshift
Type:                   ClusterIP
IP:                     172.30.195.95
Port:                   <unset> 443/TCP
Endpoints:              172.16.8.22:3000
Session Affinity:       None
No events.
Version
ansible 2.2.3.0
  config file = /usr/share/ansible/openshift-ansible/ansible.cfg
  configured module search path = Default w/o overrides

openshift-ansible-3.6.85-1-8-gc5f4a60
Expected Results

Access to Kibana UI from the public network

Observed Results

Kibana UI is not accessible.

dyegoe commented 7 years ago

Maybe you can use the var openshift_public_hostname in your inventory. I think the cause is that Ansible gets the default hostname from the OS, so you need to configure a public DNS entry to reach it. My problem is that openshift_logging_kibana_hostname didn't work for me: the deployment kept the kibana.router.default.svc.cluster.local entry. So when I configured that DNS entry in my /etc/hosts to reach the HAProxy, I was redirected to OpenShift, and when I entered my username and password I received an "Unauthorized" message.
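
For illustration, that would be a single inventory line along these lines (the hostname here is just an example):

openshift_public_hostname=openshift-master.mydomain.net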

wozniakjan commented 7 years ago

That is due to the authentication redirection. The additional variable you would like to set is openshift_logging_master_public_url, which defaults to the project URL. More information about this topic can be found here.

mazzy89 commented 7 years ago

@wozniakjan OK, thank you. I'm going to try to make the changes.

mazzy89 commented 7 years ago

@wozniakjan I've tried with the following config

openshift_logging_install_logging=True
openshift_logging_es_pvc_dynamic=True
openshift_logging_es_pvc_size=50Gi
openshift_logging_kibana_hostname=kibana.apps.mydomain.net
openshift_logging_master_public_url=https://openshift-master.mydomain.net

and what I got is an Unauthorized page. Any idea? Of course, mydomain is mocked here; in my config there is my real domain ;-)
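
For reference, since the redirect in the issue description carries client_id=kibana-proxy, one thing worth inspecting at this point is the OAuth client Kibana uses, e.g.:

oc get oauthclient kibana-proxy -o yaml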

wozniakjan commented 7 years ago

The logging component gets deployed as the system:admin user. For authentication purposes, a user with appropriate access control rights needs to be created. A little bit about the topic can be found here, and general information here. I will try to gather more up-to-date and detailed information.
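
For example, a user can be granted cluster-wide read access like this (the username myuser is hypothetical):

oc adm policy add-cluster-role-to-user cluster-reader myuser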

mazzy89 commented 7 years ago

@wozniakjan yes, I checked the list of accepted users and system:admin appears there, but it's still very hard to understand how the mechanism works. I tried to set up cluster_reader in the inventory, with no luck. I tried to change the configmap manually, adding my username to the users, but no luck.

mazzy89 commented 7 years ago

I tried setting the var openshift_logging_elasticsearch_ops_allow_cluster_reader=True in the static inventory, but nothing; still Unauthorized.

mazzy89 commented 7 years ago

It seems that something in the image doesn't work: https://github.com/openshift/origin-aggregated-logging/issues/286

mazzy89 commented 7 years ago

The only error that I found in kibana-proxy was: Could not read TLS opts from /secret/server-tls.json; error was: Error: ENOENT, no such file or directory '/secret/server-tls.json'
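
For reference, that error can be read straight from the proxy container's logs (the pod name is taken from the oc output above; the sidecar container is assumed to be named kibana-proxy):

oc -n logging logs logging-kibana-1-bpn2z -c kibana-proxy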

mazzy89 commented 7 years ago

I tried to fix the above error by fixing the creation of secrets under roles/openshift_logging_kibana/tasks/main.yaml; the server-tls.json file is not referenced correctly. Unfortunately, it still doesn't work.

ewolinetz commented 7 years ago

@mazzy89 it looks like we have a bug with the creation of the kibana configmap secret: the entry is being created as server_tls instead of server_tls.json. I will open a PR to address that today.
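
For reference, the key names actually present in the secret can be checked with something like the following (the secret name logging-kibana-proxy is assumed from the role's defaults):

oc -n logging get secret logging-kibana-proxy -o yaml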

mazzy89 commented 7 years ago

@ewolinetz yes, but that's not the problem. I've tried a manual fix in my local repo, but still nothing.

mazzy89 commented 7 years ago

@ewolinetz can you please validate whether these two configs are correct?

openshift_logging_kibana_hostname=kibana.apps.mydomain.net
openshift_logging_master_public_url=https://openshift-master.mydomain.net

I mean, the master public URL must be the one that the user hits to access the OpenShift console, and the one that is registered in the GitHub OAuth, while the kibana hostname must stay under apps?

ewolinetz commented 7 years ago

Those are correct. You can verify that the correct value is being used if you oc describe dc/logging-kibana and look for OAP_PUBLIC_MASTER_URL.
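
For example:

oc -n logging describe dc/logging-kibana | grep OAP_PUBLIC_MASTER_URL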

The kibana hostname is what is used for both the route and the oauthclient for Kibana, so as you have it, it should be fine.

mazzy89 commented 7 years ago

OK, so then the problem is somewhere else.

ewolinetz commented 7 years ago

This should resolve the error you saw in the kibana-proxy container, though: https://github.com/openshift/openshift-ansible/pull/4327

mazzy89 commented 7 years ago

@ewolinetz could it be that I'm actually using a subdomain to set up OAuth2? In other words, the master_public_url is something like https://openshift-master.os.mydomain.net. Is it related?

wozniakjan commented 7 years ago

@mazzy89 and do you see any error in the logs after #4327 was merged to master?

In my case, the reason appears to be /auth/openshift/callback?error=access_denied&error_description=scope+denied%3A+user%3Afull, which I will try to get resolved today.

mazzy89 commented 7 years ago

Are you sure it's merged? According to the discussion, it seems it hasn't been. The reason looks exactly like mine.

wozniakjan commented 7 years ago

mea culpa, I auto-assumed it got merged for some reason

mazzy89 commented 7 years ago

No problem. The most important thing is that you are able to reproduce the error and that I'm not the only one experiencing it.

mazzy89 commented 7 years ago

@wozniakjan any resolution from your side?

wozniakjan commented 7 years ago

For now, I would recommend setting openshift_logging_image_version=v1.5.1 in your inventory, checking out the release-1.5 branch of this repository, and re-running the logging playbook; that is possibly the latest working version. We are still investigating why master with the latest images is not working.
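
Roughly, that would be as follows (the playbook path follows the 3.5-era layout of this repo and may differ in your checkout; the inventory path is a placeholder):

# in the inventory: openshift_logging_image_version=v1.5.1
git checkout release-1.5
ansible-playbook -i /path/to/inventory playbooks/byo/openshift-cluster/openshift-logging.yml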

mazzy89 commented 7 years ago

OK, I'm definitely going to try, but I'm quite sure it won't work either. I'll keep you updated.

mazzy89 commented 7 years ago

As you recommended, it now works.

wozniakjan commented 7 years ago

@mazzy89 this should now be fixed; feel free to try with master and the latest images.

jcantrill commented 7 years ago

Closing as resolved, since the auth proxy image was out of date.

whanklee commented 5 years ago

Hi.

Kibana UI is still not available.

I used these parameters for installation:

openshift_master_cluster_hostname=master-internal.taopenshift.mkb.hu
openshift_master_cluster_public_hostname=master.taopenshift.mkb.hu
openshift_master_public_api_url=https://master.taopenshift.mkb.hu:8443
openshift_master_public_console_url=https://master.taopenshift.mkb.hu:8443/console
openshift_master_default_subdomain=apps.taopenshift.mkb.hu

openshift_logging_use_ops=true
openshift_logging_master_url=https://master-internal.taopenshift.mkb.hu:8443
openshift_logging_master_public_url=https://master-internal.taopenshift.mkb.hu:8443
openshift_logging_install_logging=true
openshift_logging_purge_logging=true
openshift_logging_install_eventrouter=true
openshift_logging_curator_default_days=7
openshift_logging_curator_run_hour=2
openshift_logging_curator_run_minute=10
openshift_logging_kibana_hostname=kibana.apps.taopenshift.mkb.hu
openshift_logging_es_cluster_size=3
openshift_logging_es_number_of_replicas=2
openshift_logging_es_number_of_shards=1
openshift_logging_es_nodeselector={"es_logging": "true"}
openshift_logging_es_pvc_dynamic=false
openshift_logging_es_pvc_size=5Gi
openshift_logging_es_ops_nodeselector={"node-type": "infrastructure"}
openshift_logging_fluentd_nodeselector={"logging-infra-fluentd": "true"}

OpenShift version is 3.11.

@ewolinetz, can you help me, please?

jcantrill commented 5 years ago

You should have no need to define any of the URLs to the master or api server. These should be discoverable during deployment.
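
For illustration only, a trimmed logging section along those lines might look like this (the kibana hostname is kept from the report above; all master/API URLs are dropped so they are discovered during deployment):

openshift_logging_install_logging=true
openshift_logging_kibana_hostname=kibana.apps.taopenshift.mkb.hu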

whanklee commented 5 years ago

Understood; however, why doesn't it work?

jcantrill commented 5 years ago

Understood; however, why doesn't it work?

Logging is complicated. We can't comment on why it doesn't work without evaluating your entire stack.