gshipley / installcentos


Blank white screen when opening openshift in browser #152


TheNotary commented 5 years ago

So I did quite a bit of trial and error and ran into many problems. I ultimately switched to installing from branch 3.10 (which should maybe be a tag, to make things clearer). While this made my install succeed, and I can use the oc command to log in and view status, I can't view the web UI... it just shows a blank screen when going to https://console.MYIP.nip.io:8443

Also, when I curl that API, I do see this response:

curl https://console.MYIP.nip.io:8443 -k
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps.openshift.io",
    "/apis/apps.openshift.io/v1",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/authorization.openshift.io",
    "/apis/authorization.openshift.io/v1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/build.openshift.io",
    "/apis/build.openshift.io/v1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/image.openshift.io",
    "/apis/image.openshift.io/v1",
    "/apis/network.openshift.io",
    "/apis/network.openshift.io/v1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/oauth.openshift.io",
    "/apis/oauth.openshift.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/project.openshift.io",
    "/apis/project.openshift.io/v1",
    "/apis/quota.openshift.io",
    "/apis/quota.openshift.io/v1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/route.openshift.io",
    "/apis/route.openshift.io/v1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1beta1",
    "/apis/security.openshift.io",
    "/apis/security.openshift.io/v1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/apis/template.openshift.io",
    "/apis/template.openshift.io/v1",
    "/apis/user.openshift.io",
    "/apis/user.openshift.io/v1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/authorization.openshift.io-bootstrapclusterroles",
    "/healthz/poststarthook/authorization.openshift.io-ensureopenshift-infra",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/image.openshift.io-apiserver-caches",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
    "/healthz/poststarthook/openshift.io-restmapperupdater",
    "/healthz/poststarthook/openshift.io-startinformers",
    "/healthz/poststarthook/openshift.io-webconsolepublicurl",
    "/healthz/poststarthook/project.openshift.io-projectauthorizationcache",
    "/healthz/poststarthook/project.openshift.io-projectcache",
    "/healthz/poststarthook/quota.openshift.io-clusterquotamapping",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/security.openshift.io-bootstrapscc",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/ready",
    "/metrics",
    "/oapi",
    "/oapi/v1",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/version",
    "/version/openshift"
  ]
}

I'm not sure what to make of that, but when I look at the docker ps output, I see containers with names that use variables from my prior install steps... so maybe the old containers aren't being properly replaced?

# docker ps
CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS              PORTS               NAMES
f052189654ff        559f081fef44                             "/bin/bash -c '#!/..."   14 minutes ago      Up 14 minutes                           k8s_api_master-api-openshift.MYDOMAIN.com_kube-system_d400ab198a15ccbe597efa10e38d0719_3
781fe86eb0de        559f081fef44                             "/bin/bash -c '#!/..."   14 minutes ago      Up 14 minutes                           k8s_controllers_master-controllers-openshift.MYDOMAIN.com_kube-system_04335d3709e8228377883a762ef7f830_2
26575ac40b8d        registry.fedoraproject.org/latest/etcd   "/usr/bin/etcd"          14 minutes ago      Up 14 minutes                           etcd_container
ab3d80a71c52        docker.io/openshift/origin-pod:v3.11.0   "/usr/bin/pod"           14 minutes ago      Up 14 minutes                           k8s_POD_master-etcd-openshift.MYDOMAIN.com_kube-system_cce6e296eb6ab6f1aff45318fddc32f5_3
1f4c51f26827        docker.io/openshift/origin-pod:v3.11.0   "/usr/bin/pod"           14 minutes ago      Up 14 minutes                           k8s_POD_master-api-openshift.MYDOMAIN.com_kube-system_d400ab198a15ccbe597efa10e38d0719_4
bd423583326c        docker.io/openshift/origin-pod:v3.11.0   "/usr/bin/pod"           14 minutes ago      Up 14 minutes                           k8s_POD_master-controllers-openshift.MYDOMAIN.com_kube-system_04335d3709e8228377883a762ef7f830_3

In the meantime I'm going to try a fresh install of CentOS 7 again and keep notes here as I move forward.

TheNotary commented 5 years ago

Steps to take after fresh minimal install completes:

yum update -y
yum install -y git
git clone https://github.com/gshipley/installcentos.git -b 3.10
cd installcentos
./install-openshift.sh
Domain to use: (MYIP.nip.io): 
Username: (root): 
Password: (password): 
OpenShift Version: (3.10): 
IP: (192.168.1.129): 
API Port: (8443): 

I notice some shell script failures:

.
.
.
Package git-1.8.3.1-20.el7.x86_64 already installed and latest version
No package zile available.
Package kexec-tools-2.0.15-21.el7.x86_64 already installed and latest version
Package 1:NetworkManager-1.12.0-8.el7_6.x86_64 already installed and latest version
No package python2-pip available.
.
.
.

This leads to a failed ansible task:

Failure summary:

  1. Hosts:    192.168.1.129
     Play:     Verify Requirements
     Task:     Run variable sanity checks
     Message:  last_checked_host: 192.168.1.129, last_checked_var: openshift_master_manage_htpasswd;Found removed variables: openshift_metrics_image_version is replaced by openshift_metrics_<component>_image; openshift_logging_elasticsearch_proxy_image_version is replaced by openshift_logging_elasticsearch_proxy_image; 
htpasswd: cannot create file /etc/origin/master/htpasswd
TheNotary commented 5 years ago

I'm seeing that the solution to getting packages like zile to install correctly involves manually editing /etc/yum.repos.d/epel.repo, which seems to be misconfigured by some minimal installs of CentOS (mine is CentOS 7.5, I believe). For me, the 7th line needed to be patched to say enabled=1.
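For anyone else hitting this, that edit can be scripted. This is a rough sketch, assuming GNU sed and the stock layout where the [epel] section's enabled= line is the first one in the file; it's demonstrated on a scratch file here so you can sanity-check the substitution before pointing it at the real /etc/yum.repos.d/epel.repo (and back that file up first):

```shell
# Scratch copy standing in for /etc/yum.repos.d/epel.repo:
repo=$(mktemp)
printf '[epel]\nname=EPEL 7\nenabled=0\ngpgcheck=1\n[epel-debuginfo]\nenabled=0\n' > "$repo"

# GNU sed: the 0,/regexp/ address applies the substitution only up to the
# first match, so only the [epel] section gets switched on.
sed -i '0,/enabled=0/s//enabled=1/' "$repo"

grep -n 'enabled' "$repo"
```

After editing the real file, a `yum clean all` is probably worth running so the enabled repo's metadata gets refetched.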

TheNotary commented 5 years ago

After working around the issues with epel.repo, I'm still getting various errors, as though the script isn't able to install oc correctly and is failing to create a file:

Failure summary:

  1. Hosts:    192.168.1.129
     Play:     Verify Requirements
     Task:     Run variable sanity checks
     Message:  last_checked_host: 192.168.1.129, last_checked_var: openshift_master_manage_htpasswd;Found removed variables: openshift_metrics_image_version is replaced by openshift_metrics_<component>_image; openshift_logging_elasticsearch_proxy_image_version is replaced by openshift_logging_elasticsearch_proxy_image; 
htpasswd: cannot create file /etc/origin/master/htpasswd
./install-openshift.sh: line 155: oc: command not found
Failed to restart origin-master-api.service: Unit not found.
./install-openshift.sh: line 169: oc: command not found
created volume 1
./install-openshift.sh: line 169: oc: command not found
created volume 2
./install-openshift.sh: line 169: oc: command not found
created volume 3

Edit:

I'm thinking that the scripts in the repo never worked, and that what actually got me to the blank white screen was downloading the binaries from https://www.okd.io/download.html and following their instructions, plus some extra steps that I'll note below.

Set up the server

sudo true
sudo yum install -y wget git
cd ~
sudo systemctl stop firewalld
mkdir bin/
cd bin/
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -zxvf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd openshift-origin-server-v3.11.0-0cbc58b-linux-64bit

echo 'export PATH="/home/john/bin/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit:$PATH"' >> ~/.bashrc

sudo ./openshift start

Set up the client tools

cd ~/bin
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar -zxvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit

sudo chmod +r "/home/john/bin/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit/openshift.local.config/master/admin.kubeconfig"
echo 'export PATH="/home/john/bin/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit:$PATH"' >> ~/.bashrc
echo 'export KUBECONFIG=/home/john/bin/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit/openshift.local.config/master/admin.kubeconfig' >> ~/.bashrc
echo 'export CURL_CA_BUNDLE=/home/john/bin/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit/openshift.local.config/master/ca.crt' >> ~/.bashrc
source ~/.bashrc
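One caveat with the echo ... >> ~/.bashrc lines above: re-running them (which is likely during this kind of trial and error) appends duplicate entries every time. A small guard avoids that; append_once is a hypothetical helper of my own, demonstrated on a scratch file rather than the real ~/.bashrc:

```shell
# Hypothetical helper: append a line only if it isn't already present verbatim.
append_once() {
  grep -qxF "$1" "$2" 2>/dev/null || printf '%s\n' "$1" >> "$2"
}

# Scratch file standing in for ~/.bashrc:
rc=$(mktemp)
append_once 'export KUBECONFIG=/home/john/bin/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit/openshift.local.config/master/admin.kubeconfig' "$rc"
append_once 'export KUBECONFIG=/home/john/bin/openshift-origin-server-v3.11.0-0cbc58b-linux-64bit/openshift.local.config/master/admin.kubeconfig' "$rc"
wc -l < "$rc"   # still 1 line after two calls
```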

From there the server is actually working, but it still shows a blank white screen when trying to reach the app... possibly because the console hostname isn't in the Host header of the web request.
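That hypothesis is checkable, since routing decisions like this are made on the HTTP Host header, which curl can override with -H. Below is a self-contained demonstration (a throwaway Python echo server on localhost port 8081, nothing to do with the real cluster) showing that the same IP and port can be asked for under different hostnames; against the real node you'd run the same curl -H "Host: console.MYIP.nip.io" against https://192.168.1.129:8443 with -k:

```shell
# Throwaway server that echoes back the Host header it received.
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHost(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ("host=" + self.headers.get("Host", "")).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

HTTPServer(("127.0.0.1", 8081), EchoHost).serve_forever()
EOF
srv=$!
sleep 1

# Same IP and port, two different Host headers:
out_default=$(curl -s http://127.0.0.1:8081/)
out_console=$(curl -s -H "Host: console.MYIP.nip.io" http://127.0.0.1:8081/)
kill "$srv"
echo "$out_default"   # host=127.0.0.1:8081
echo "$out_console"   # host=console.MYIP.nip.io
```

If the real server only renders the console when asked for by its hostname, the second form should behave differently from hitting the bare IP.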

Fix the blank console screen by installing the web console (but it probably won't work)

Ref: https://github.com/openshift/origin/issues/20983#issuecomment-421924429

cd ~
mkdir src
cd src
git clone https://github.com/openshift/origin.git -b release-3.10
cd origin
oc login -u system:admin
oc create namespace openshift-web-console
oc process -f install/origin-web-console/console-template.yaml -p "API_SERVER_CONFIG=$(cat install/origin-web-console/console-config.yaml)" | oc apply -n openshift-web-console -f -

Install the integrated docker registry

Then fully test it out without using the web console:

oc login https://192.168.1.129:8443
test
test
oc new-project example
oc project example
oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git
oc expose svc/ruby-ex