Closed: skunkr closed this issue 4 years ago.
Once I configured openshift_master_logging_public_url=https://_openshift.console.public.url_:8443 and openshift_logging_master_url=https://kubernetes.default.svc, logging seemed to be fine for both fluentd and the Kibana UI login.
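As a minimal sketch, those two settings would sit in the [OSEv3:vars] section of the inventory like below (the console hostname is a placeholder for the redacted URL from the comment above):

[OSEv3:vars]
# public console URL used as the loggingPublicURL (placeholder hostname)
openshift_master_logging_public_url=https://openshift.console.public.url:8443
# internal Kubernetes API endpoint the logging components talk to
openshift_logging_master_url=https://kubernetes.default.svc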
I am facing the same issue; here is my inventory:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
glusterfs
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_release=3.11
# Enable htpasswd authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'mappingMethod': 'add', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_file=/root/server_setup/htpasswd
openshift_master_cluster_method=native
# NOTE next option is incompatible with named certs
openshift_master_cluster_hostname=admin.os.clappy.cloud
openshift_master_cluster_public_hostname=admin.os.xxx.xxx
openshift_master_metrics_public_url=https://metrics.os.xxx.xxx/hawkular/metrics
openshift_master_logging_public_url=https://kibana.os.xxx.xxx
openshift_master_default_subdomain=clappy.cloud
openshift_hosted_router_certificate={"certfile": "/root/server_setup/letsencrypt/live/xxx.xxx/privkey.pem", "cafile": "/root/server_setup/letsencrypt_ca_bundle.pem"}
openshift_enable_olm=true
# Networking
os_sdn_network_plugin_name="redhat/openshift-ovs-multitenant"
osm_cluster_network_cidr=10.0.0.0/8
osm_host_subnet_length=12
# Custom certificates
openshift_master_named_certificates=[{"certfile": "/root/server_setup/letsencrypt/live/admin.os.xxx.xxx/fullchain.pem", "keyfile": "/root/server_setup/letsencrypt/live/admin.os.xxx.xxx/privkey.pem", "cafile": "/root/server_setup/letsencrypt_ca_bundle.pem"}]
# Custom docker options
openshift_docker_options="--log-level=warn --ipv6=false --log-driver=json-file"
# Use dnsmasq instead of skydns
openshift_use_dnsmasq=true
# Use iptables
os_firewall_use_firewalld=false
# Set log level
openshift_master_debug_level=0
openshift_node_debug_level=0
# Dynamic storage provisioning
openshift_master_dynamic_provisioning_enabled=true
# GlusterFS config
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=true
# cluster metrics
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
# logging config
openshift_logging_install_logging=false
openshift_logging_fluentd_journal_read_from_head=true
openshift_logging_es_cluster_size=3
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_number_of_replicas=1
openshift_logging_es_number_of_shards=5
openshift_logging_es_pvc_size=50G
openshift_logging_es_memory_limit=8G
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
# cluster monitoring and alerting
openshift_cluster_monitoring_operator_prometheus_storage_enabled=true
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true
# service catalog
openshift_enable_service_catalog=false
ansible_service_broker_install=false
template_service_broker_install=false
# host group for masters
[masters]
n[1:3].xxx.xxx
# host group for etcd
[etcd]
n[1:3].xxx.xxx
# host group for nodes, includes region info
[nodes]
n[1:3].xxx.xxx openshift_node_group_name='node-config-master-infra'
# glusterfs nodes
[glusterfs]
n[1:3].xxx.xxx glusterfs_devices='["/dev/sdb"]'
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Description
On a fresh installation, the logging-fluentd pod is not starting and the Kibana UI cannot be accessed.
Version
If you're operating from a git clone:
openshift-ansible-3.9.29-1-34-g1ecdd23
Steps To Reproduce
Expected Results
Observed Results
Additional Information
The fluentd container is in CrashLoopBackOff, and the Kibana UI cannot be accessed with admin credentials. /etc/ansible/hosts: openshift_logging_install_logging=true
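For inspecting the crash-looping fluentd pod, a quick CLI sketch (assuming the logging stack runs in the default openshift-logging namespace; the pod name below is a placeholder taken from the oc get pods output):

# list the logging pods and their status
oc get pods -n openshift-logging
# show events explaining why the fluentd pod is restarting (placeholder pod name)
oc describe pod logging-fluentd-xxxxx -n openshift-logging
# read the container logs from the previous, failed run
oc logs logging-fluentd-xxxxx -n openshift-logging --previous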