Maybe you can use this var in your inventory:
openshift_public_hostname
I think the cause is that Ansible picks up the default hostname from the OS, so you need to configure your public DNS entries to reach the hosts.
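For illustration, a minimal inventory sketch of that var as a host variable (the hostnames here are placeholders, not values from this thread):
[masters]
master1.internal.example.com openshift_public_hostname=openshift-master.example.com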
My problem is that openshift_logging_kibana_hostname didn't work for me. The deployment keeps the kibana.router.default.svc.cluster.local entry. So, when I added that DNS entry to my /etc/hosts to reach the HAProxy, I was redirected to OpenShift, and when I entered my username and password I received an "Unauthorized" message.
That is due to the authentication redirect. The additional variable you want to set is openshift_logging_master_public_url, which defaults to the project URL. More information about this topic can be found here.
@wozniakjan ok thank you. gonna try to make the changes
@wozniakjan I've tried with the following config
openshift_logging_install_logging=True
openshift_logging_es_pvc_dynamic=True
openshift_logging_es_pvc_size=50Gi
openshift_logging_kibana_hostname=kibana.apps.mydomain.net
openshift_logging_master_public_url=https://openshift-master.mydomain.net
and what I got is an Unauthorized page. Any idea? Of course mydomain is mocked here; in my config there is my real domain ;-)
@wozniakjan yes, I checked the list of accepted users and system:admin appears there, but it's still very hard to understand how the mechanism works. I tried to set up cluster_reader in the inventory with no luck. I tried to change the configmap manually, adding my username to the users, but no luck.
I also tried to set openshift_logging_elasticsearch_ops_allow_cluster_reader=True in the static inventory, but nothing. Still unauthorized.
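If it helps, the role can also be granted directly with the CLI; a hedged sketch (the username is a placeholder):
oc adm policy add-cluster-role-to-user cluster-reader myuser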
It seems that something in the image doesn't work: https://github.com/openshift/origin-aggregated-logging/issues/286
The only error that I found on kibana-proxy was Could not read TLS opts from /secret/server-tls.json; error was: Error: ENOENT, no such file or directory '/secret/server-tls.json'
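For anyone reproducing this, a hedged way to pull that message out of the proxy container (the pod name is a placeholder; the kibana-proxy container name and the logging namespace are assumptions about a default install):
oc logs logging-kibana-1-abcde -c kibana-proxy -n logging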
Tried to fix the above error by adjusting the creation of secrets under roles/openshift_logging_kibana/tasks/main.yaml. The server-tls.json file is not referenced correctly. Unfortunately it still doesn't work.
@mazzy89 looks like we have a bug with the creation of the kibana configmap secret. It looks like the entry is being created as server_tls instead of server_tls.json. I will open a PR to address that today.
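A hedged way to confirm which keys the secret actually contains (the logging-kibana-proxy secret name and the logging namespace are assumptions about a default install):
oc get secret logging-kibana-proxy -n logging -o yaml
# compare the key names under "data:" with the path the proxy expects (/secret/server-tls.json)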
@ewolinetz yes, but that's not the problem. I've tried a manual fix in my local repo but still nothing.
@ewolinetz can you please validate whether these two configs are correct?
openshift_logging_kibana_hostname=kibana.apps.mydomain.net
openshift_logging_master_public_url=https://openshift-master.mydomain.net
I mean, the master public URL must be the one that the user hits to access the OpenShift console and the one that is registered in the GitHub OAuth, while the kibana hostname must stay under apps?
Those are correct.
You can verify the correct value is being used if you oc describe dc/logging-kibana and look for OAP_PUBLIC_MASTER_URL.
The kibana hostname is what is used for both the route and the oauthclient for Kibana, so as you have it, it should be fine.
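For example (dc/logging-kibana is the deployment config mentioned above; the logging namespace is an assumption):
oc describe dc/logging-kibana -n logging | grep OAP_PUBLIC_MASTER_URL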
ok so then the problem is somewhere else
This should resolve the error you saw in the kibana-proxy container, though: https://github.com/openshift/openshift-ansible/pull/4327
@ewolinetz could it be that I'm actually using a subdomain to set up OAuth2? In other words, the master_public_url is something like https://openshift-master.os.mydomain.net. Is it related?
@mazzy89 and do you see any error in the logs after #4327 was merged to master?
In my case, the reason appears to be /auth/openshift/callback?error=access_denied&error_description=scope+denied%3A+user%3Afull, which I will try to get resolved today.
Are you sure it's merged? According to the discussion it seems it is not. The reason looks exactly like mine.
mea culpa, I auto-assumed it got merged for some reason
No problem. The most important thing is that you are able to reproduce the error and that I'm not the only one experiencing it.
@wozniakjan any resolution from your side?
for now, I would recommend setting openshift_logging_image_version=v1.5.1 in your inventory, checking out the release-1.5 branch of this repository, and re-running the logging playbook; that is possibly the latest working version. We are still investigating why master with the latest images is not working.
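A rough sketch of those steps, assuming a local openshift-ansible checkout; the inventory path and the exact playbook location are assumptions, not something confirmed in this thread:
cd openshift-ansible
git checkout release-1.5
# add to the [OSEv3:vars] section of your inventory:
#   openshift_logging_image_version=v1.5.1
ansible-playbook -i /path/to/inventory playbooks/byo/openshift-cluster/openshift-logging.yml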
ok, definitely going to try, though I'm quite sure it won't work either. I'll keep you updated.
As you recommended, it works now.
@mazzy89 this should be now fixed, feel free to try with master and latest images
Closing as resolved since the auth proxy image was out of date.
Hi.
Kibana UI is still not available.
I used these parameters for installation:
openshift_master_cluster_hostname=master-internal.taopenshift.mkb.hu
openshift_master_cluster_public_hostname=master.taopenshift.mkb.hu
openshift_master_public_api_url=https://master.taopenshift.mkb.hu:8443
openshift_master_public_console_url=https://master.taopenshift.mkb.hu:8443/console
openshift_master_default_subdomain=apps.taopenshift.mkb.hu
openshift_logging_use_ops=true
openshift_logging_master_url=https://master-internal.taopenshift.mkb.hu:8443
openshift_logging_master_public_url=https://master-internal.taopenshift.mkb.hu:8443
openshift_logging_install_logging=true
openshift_logging_purge_logging=true
openshift_logging_install_eventrouter=true
openshift_logging_curator_default_days=7
openshift_logging_curator_run_hour=2
openshift_logging_curator_run_minute=10
openshift_logging_kibana_hostname=kibana.apps.taopenshift.mkb.hu
openshift_logging_es_cluster_size=3
openshift_logging_es_number_of_replicas=2
openshift_logging_es_number_of_shards=1
openshift_logging_es_nodeselector={"es_logging": "true"}
openshift_logging_es_pvc_dynamic=false
openshift_logging_es_pvc_size=5Gi
openshift_logging_es_ops_nodeselector={"node-type": "infrastructure"}
openshift_logging_fluentd_nodeselector={"logging-infra-fluentd": "true"}
OpenShift version is 3.11.
@ewolinetz, could you help me, please?
You should have no need to define any of the URLs to the master or api server. These should be discoverable during deployment.
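In other words, a trimmed sketch of the logging portion of that inventory, dropping the master URL vars (values are taken from the list above; keep or drop the remaining vars according to your environment):
openshift_logging_install_logging=true
openshift_logging_use_ops=true
openshift_logging_kibana_hostname=kibana.apps.taopenshift.mkb.hu
openshift_logging_es_cluster_size=3
openshift_logging_es_pvc_size=5Gi
openshift_logging_fluentd_nodeselector={"logging-infra-fluentd": "true"}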
Understood; however, why doesn't it work?
Logging is complicated. We can't comment on why it doesn't work without evaluating your entire stack.
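As a hedged starting point for that evaluation on 3.11 (the openshift-logging namespace and the kibana-proxy container name are assumptions about a default install; the pod name is a placeholder):
oc get pods -n openshift-logging
oc get routes -n openshift-logging
oc logs <kibana-pod> -c kibana-proxy -n openshift-logging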
Description
After installing OpenShift logging, it's not possible to access the Kibana UI. Here are the vars defined in the inventory regarding OpenShift logging.
These vars deploy the EFK stack, but the Kibana UI is not accessible because, according to the router, the URL is kibana.router.default.svc.cluster.local, which of course is not routable from the outside. Then I tried to deploy one more time, adding openshift_logging_kibana_hostname=kibana.apps.mydomain.net to the inventory. The Kibana UI is still inaccessible. When I hit
https://kibana.apps.mydomain.net
the browser redirects me to the internal AWS DNS name for the master (something like https://ip-10-20-5-198.us-east-2.compute.internal/oauth/authorize?response_type=code&redirect_uri=https%3A%2F%2Fkibana.apps.mydomain.net%2Fauth%2Fopenshift%2Fcallback&client_id=kibana-proxy) and of course a DNS error occurs.
Version
Expected Results
Access to the Kibana UI from the public network.
Observed Results
Kibana UI is not accessible.
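For reference, a hedged way to check where that OAuth redirect is coming from (kibana-proxy is the oauthclient id visible in the redirect URL above; the master-config.yaml path assumes a standard origin 3.x install):
oc get oauthclient kibana-proxy -o yaml
# on a master host, compare masterPublicURL in /etc/origin/master/master-config.yaml
# with the hostname the browser is being redirected to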