operator-framework / operator-lifecycle-manager

A management framework for extending Kubernetes with Operators
https://olm.operatorframework.io
Apache License 2.0

`./scripts/run_console_local.sh` doesn't provide a usable console with `make run-local` or `make run-local-shift` #437

Closed · djwhatle closed this issue 6 years ago

djwhatle commented 6 years ago

I've been trying to get started interacting with OLM via the web UI, and have run into issues on minikube / minishift using `make run-local` and `make run-local-shift`, followed by `./scripts/run_console_local.sh` to start the OKD console.

Navigating to https://my_ip:8443 shows only the service catalog and nothing else, so I figured that `./scripts/run_console_local.sh` might help.
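
For reference, the exact sequence I'm running is below (the kvm2 driver is just my local setup):

minikube config set vm-driver kvm2   # my local hypervisor driver
make run-local                       # build images and install OLM into minikube
./scripts/run_console_local.sh       # start the OKD console against the cluster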

Once I'm successful, I expect there to be a new section of the UI that I can interact with, as shown in the screenshot from README.md (below).

[screenshot from README.md: sub-view]

The actual result is not as functional: the menu buttons are clickable, but not much else happens on the page.

I am also seeing a bunch of errors from the `./scripts/run_console_local.sh` script:

[dwhatley@precision-t operator-lifecycle-manager]$ ./scripts/run_console_local.sh 
Using https://192.168.42.152:8443
2018/08/29 20:40:27 cmd/main: cookies are not secure because base-address is not https!
2018/08/29 20:40:27 cmd/main: running with AUTHENTICATION DISABLED!
2018/08/29 20:40:27 cmd/main: Binding to 0.0.0.0:9000...
2018/08/29 20:40:27 cmd/main: not using TLS
2018/08/29 20:40:29 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:40:29 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
[... Lots more of these ...]
2018/08/29 20:41:18 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:19 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:22 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:24 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:24 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:25 http: proxy error: dial tcp 192.168.42.152:8443: i/o timeout
2018/08/29 20:41:25 http: proxy error: dial tcp 192.168.42.152:8443: i/o timeout
2018/08/29 20:41:25 http: proxy error: dial tcp 192.168.42.152:8443: i/o timeout

I'll continue to debug, but I'm looking for pointers from the experts on where to start :). Perhaps the docs need an update as well if this isn't a working method anymore.

njhale commented 6 years ago

Could you post the output from `make run-local`?

djwhatle commented 6 years ago

@njhale `make run-local` works fine, but I still haven't figured out how to get a working OLM web UI.

It's possible that the sequence of commands I'm running is incorrect (`make run-local`, then `./scripts/run_console_local.sh`), but I expect to be able to see an OLM web UI after running these steps.

`make run-local`

[dwhatley@precision-t operator-lifecycle-manager]$ rm -rf ~/.minikube/

[dwhatley@precision-t operator-lifecycle-manager]$ minikube config set vm-driver kvm2
These changes will take effect upon a minikube delete and then a minikube start

[dwhatley@precision-t operator-lifecycle-manager]$ make run-local
. ./scripts/build_local.sh
Starting local Kubernetes v1.11.1 cluster...
Starting VM...
Downloading Minikube ISO
 160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.11.1
Downloading kubelet v1.11.1
Finished Downloading kubeadm v1.11.1
Finished Downloading kubelet v1.11.1
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
Switched to context "minikube".
Sending build context to Docker daemon 93.59 MB
Step 1/36 : FROM golang:1.10 as builder
1.10: Pulling from library/golang
55cbf04beb70: Pull complete 
1607093a898c: Pull complete 
9a8ea045c926: Pull complete 
d4eee24d4dac: Pull complete 
9c35c9787a2f: Pull complete 
1701d74f449a: Pull complete 
ed9b85f3273c: Pull complete 
Digest: sha256:b38130c9826dc4eff5375f7deeac6dc9c2b9947f194229fac34712f549d03361
Status: Downloaded newer image for golang:1.10
 ---> 5b1054129196
Step 2/36 : LABEL builder=true
 ---> Running in b396221a6117
Removing intermediate container b396221a6117
 ---> a53ebf8a1b06
Step 3/36 : WORKDIR /go/src/github.com/operator-framework/operator-lifecycle-manager
Removing intermediate container b1a7981521bd
 ---> 0610be57eed9
Step 4/36 : RUN curl -L https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 -o /bin/jq
 ---> Running in a035070960e4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   599    0   599    0     0   4009      0 --:--:-- --:--:-- --:--:--  4047
100 2956k  100 2956k    0     0  2186k      0  0:00:01  0:00:01 --:--:-- 3489k
Removing intermediate container a035070960e4
 ---> 86d609ae99ca
Step 5/36 : RUN chmod +x /bin/jq
 ---> Running in 3228c1acddc3
Removing intermediate container 3228c1acddc3
 ---> cae612d39a06
Step 6/36 : COPY . .
 ---> c556ef3ea639
Step 7/36 : RUN make build-coverage
 ---> Running in 993545ef45ea
building bin/catalog with coverage
building bin/olm with coverage
building bin/servicebroker with coverage
building bin/validator with coverage
Removing intermediate container 993545ef45ea
 ---> 42cdca7725bc
Step 8/36 : RUN go test -c -o /bin/e2e ./test/e2e/...
 ---> Running in 7a9c446b0e97
Removing intermediate container 7a9c446b0e97
 ---> 685b990a80d4
Step 9/36 : FROM alpine:latest as olm
latest: Pulling from library/alpine
8e3ba11ec2a2: Pull complete 
Digest: sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Status: Downloaded newer image for alpine:latest
 ---> 11cd0b38bc3c
Step 10/36 : LABEL olm=true
 ---> Running in 02fe0dd78842
Removing intermediate container 02fe0dd78842
 ---> 68b9786536a1
Step 11/36 : WORKDIR /
Removing intermediate container fc9c094f853d
 ---> ee8a73aa243c
Step 12/36 : COPY --from=builder /go/src/github.com/operator-framework/operator-lifecycle-manager/bin/olm /bin/olm
 ---> cd62576e2819
Step 13/36 : EXPOSE 8080
 ---> Running in 015c55a2d62c
Removing intermediate container 015c55a2d62c
 ---> 06a4ec941662
Step 14/36 : CMD ["/bin/olm"]
 ---> Running in 414bd4554e6d
Removing intermediate container 414bd4554e6d
 ---> df90c78feb6f
Step 15/36 : FROM alpine:latest as catalog
 ---> 11cd0b38bc3c
Step 16/36 : LABEL catalog=true
 ---> Running in 5c12a0bde24a
Removing intermediate container 5c12a0bde24a
 ---> c11ea2c0e391
Step 17/36 : WORKDIR /
Removing intermediate container 9dfa4fd61c03
 ---> 892a54b6a4a3
Step 18/36 : COPY --from=builder /go/src/github.com/operator-framework/operator-lifecycle-manager/bin/catalog /bin/catalog
 ---> 890ad022e17d
Step 19/36 : EXPOSE 8080
 ---> Running in def47e50da91
Removing intermediate container def47e50da91
 ---> eda9aa59173d
Step 20/36 : CMD ["/bin/catalog"]
 ---> Running in aee7b027fd64
Removing intermediate container aee7b027fd64
 ---> ec818db1ff5c
Step 21/36 : FROM alpine:latest as broker
 ---> 11cd0b38bc3c
Step 22/36 : LABEL broker=true
 ---> Running in 545d1354bf82
Removing intermediate container 545d1354bf82
 ---> 4e5286208311
Step 23/36 : WORKDIR /
Removing intermediate container ba85e6e79283
 ---> 0af34ee8fe5e
Step 24/36 : COPY --from=builder /go/src/github.com/operator-framework/operator-lifecycle-manager/bin/servicebroker /bin/servicebroker
 ---> 9c5a89386f52
Step 25/36 : EXPOSE 8080
 ---> Running in a1308abf24a6
Removing intermediate container a1308abf24a6
 ---> f9f817f57206
Step 26/36 : EXPOSE 8005
 ---> Running in c6f6328352e1
Removing intermediate container c6f6328352e1
 ---> b1e4661735cb
Step 27/36 : CMD ["/bin/servicebroker"]
 ---> Running in ea820a631b78
Removing intermediate container ea820a631b78
 ---> c60140bc6280
Step 28/36 : FROM golang:1.10
 ---> 5b1054129196
Step 29/36 : LABEL e2e=true
 ---> Running in 2ef3a1708933
Removing intermediate container 2ef3a1708933
 ---> bf4febe55e16
Step 30/36 : RUN mkdir -p /var/e2e
 ---> Running in ae69e07bd4a9
Removing intermediate container ae69e07bd4a9
 ---> 8c070ba9b3b1
Step 31/36 : WORKDIR /var/e2e
Removing intermediate container 94b4ab5a6e67
 ---> c29ce532f817
Step 32/36 : COPY --from=builder /bin/e2e /bin/e2e
 ---> 8161971c683d
Step 33/36 : COPY --from=builder /bin/jq /bin/jq
 ---> 42f943bd6394
Step 34/36 : COPY ./test/e2e/e2e.sh /var/e2e/e2e.sh
 ---> ef5cda53a466
Step 35/36 : COPY ./test/e2e/tap.jq /var/e2e/tap.jq
 ---> 7d90ea5b802c
Step 36/36 : CMD ["/bin/e2e"]
 ---> Running in dfd2578931d6
Removing intermediate container dfd2578931d6
 ---> 7a764dc95625
Successfully built 7a764dc95625
mkdir -p build/resources
. ./scripts/package-release.sh 1.0.0-local build/resources Documentation/install/local-values.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/08-rh-operators.configmap.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/01-alm-operator.serviceaccount.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/03-clusterserviceversion.crd.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/05-catalogsource.crd.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/06-installplan.crd.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/07-subscription.crd.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/20-aggregated-edit.clusterrole.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/21-aggregated-view.clusterrole.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/02-alm-operator.rolebinding.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/12-alm-operator.deployment.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/13-catalog-operator.deployment.yaml
wrote /tmp/tmp.6ujXW65pvE/chart/olm/templates/10-rh-operators.catalogsource.yaml
. ./scripts/install_local.sh local build/resources
namespace "local" created
serviceaccount "olm-operator-serviceaccount" replaced
clusterrolebinding.rbac.authorization.k8s.io "olm-operator-binding-local" replaced
customresourcedefinition.apiextensions.k8s.io "clusterserviceversions.operators.coreos.com" replaced
customresourcedefinition.apiextensions.k8s.io "catalogsources.operators.coreos.com" replaced
customresourcedefinition.apiextensions.k8s.io "installplans.operators.coreos.com" replaced
customresourcedefinition.apiextensions.k8s.io "subscriptions.operators.coreos.com" replaced
configmap "rh-operators" replaced
catalogsource.operators.coreos.com "rh-operators" replaced
deployment.apps "alm-operator" replaced
deployment.apps "catalog-operator" replaced
clusterrole.rbac.authorization.k8s.io "aggregate-olm-edit" replaced
clusterrole.rbac.authorization.k8s.io "aggregate-olm-view" replaced
Waiting for rollout to finish: 0 of 1 updated replicas are available...
deployment "alm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
[dwhatley@precision-t operator-lifecycle-manager]$ 

`./scripts/run_console_local.sh`

[dwhatley@precision-t operator-lifecycle-manager]$ ./scripts/run_console_local.sh 
Using https://192.168.39.13:8443
2018/08/30 14:57:10 cmd/main: cookies are not secure because base-address is not https!
2018/08/30 14:57:10 cmd/main: running with AUTHENTICATION DISABLED!
2018/08/30 14:57:10 cmd/main: Binding to 0.0.0.0:9000...
2018/08/30 14:57:10 cmd/main: not using TLS
2018/08/30 14:57:13 http: proxy error: dial tcp 192.168.39.13:8443: connect: connection refused
2018/08/30 14:57:13 http: proxy error: dial tcp 192.168.39.13:8443: connect: connection refused
2018/08/30 14:57:17 http: proxy error: dial tcp 192.168.39.13:8443: connect: connection refused
2018/08/30 14:57:17 http: proxy error: dial tcp 192.168.39.13:8443: connect: connection refused
[... lots more of these errors ...]

alecmerdler commented 6 years ago

My guess is that the `$KUBECONFIG` environment variable is incorrectly set in the terminal session in which you are running `./scripts/run_console_local.sh`.
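
A quick way to check (assuming the default minikube layout):

echo $KUBECONFIG                            # often empty if it was never exported
kubectl config current-context              # should print "minikube"
kubectl config view --minify | grep server  # should match the IP the console script prints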

djwhatle commented 6 years ago

@alecmerdler that's definitely possible, since I didn't set `$KUBECONFIG` to anything. What would be an appropriate value to set in this case?

alecmerdler commented 6 years ago

It needs to be set to the same kubeconfig that `minikube start` writes.
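
By default minikube writes it to ~/.kube/config, so in the same shell something like:

export KUBECONFIG=$HOME/.kube/config
./scripts/run_console_local.sh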

djwhatle commented 6 years ago

Perhaps I'm doing something wrong, but after setting `$KUBECONFIG` to the config file created by minikube, I am still getting the same result.

[dwhatley@precision-t operator-lifecycle-manager]$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.39.119

[dwhatley@precision-t operator-lifecycle-manager]$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/dwhatley/.minikube/ca.crt
    server: https://192.168.39.119:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/dwhatley/.minikube/client.crt
    client-key: /home/dwhatley/.minikube/client.key

[dwhatley@precision-t operator-lifecycle-manager]$ echo $KUBECONFIG
/home/dwhatley/.kube/config

[dwhatley@precision-t operator-lifecycle-manager]$ kubectl get all --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-s8ghk                1/1       Running   0          7m
kube-system   coredns-78fcdf6894-vccmr                1/1       Running   0          7m
kube-system   etcd-minikube                           1/1       Running   0          7m
kube-system   kube-addon-manager-minikube             1/1       Running   0          7m
kube-system   kube-apiserver-minikube                 1/1       Running   0          7m
kube-system   kube-controller-manager-minikube        1/1       Running   0          7m
kube-system   kube-proxy-2wzrd                        1/1       Running   0          7m
kube-system   kube-scheduler-minikube                 1/1       Running   0          7m
kube-system   kubernetes-dashboard-6f66c7fc56-7nkr8   1/1       Running   0          7m
kube-system   storage-provisioner                     1/1       Running   0          7m
local         alm-operator-6fb5bf7886-cs9w6           1/1       Running   0          4m
local         catalog-operator-69b586648-bslrt        1/1       Running   0          4m

NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP         8m
kube-system   kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   8m
kube-system   kubernetes-dashboard   NodePort    10.108.74.127   <none>        80:30000/TCP    7m

NAMESPACE     NAME         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-system   kube-proxy   1         1         1         1            1           beta.kubernetes.io/arch=amd64   8m

NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   coredns                2         2         2            2           8m
kube-system   kubernetes-dashboard   1         1         1            1           7m
local         alm-operator           1         1         1            1           4m
local         catalog-operator       1         1         1            1           4m

NAMESPACE     NAME                              DESIRED   CURRENT   READY     AGE
kube-system   coredns-78fcdf6894                2         2         2         7m
kube-system   kubernetes-dashboard-6f66c7fc56   1         1         1         7m
local         alm-operator-6fb5bf7886           1         1         1         4m
local         catalog-operator-69b586648        1         1         1         4m

NAMESPACE   NAME           AGE
local       rh-operators   4m

[dwhatley@precision-t operator-lifecycle-manager]$ ./scripts/run_console_local.sh 
Using https://192.168.39.119:8443
2018/08/30 19:51:47 cmd/main: cookies are not secure because base-address is not https!
2018/08/30 19:51:47 cmd/main: running with AUTHENTICATION DISABLED!
2018/08/30 19:51:47 cmd/main: Binding to 0.0.0.0:9000...
2018/08/30 19:51:47 cmd/main: not using TLS
2018/08/30 19:51:49 http: proxy error: dial tcp 192.168.39.119:8443: connect: connection refused
2018/08/30 19:51:49 http: proxy error: dial tcp 192.168.39.119:8443: connect: connection refused
2018/08/30 19:51:50 http: proxy error: dial tcp 192.168.39.119:8443: connect: connection refused
2018/08/30 19:51:51 http: proxy error: dial tcp 192.168.39.119:8443: connect: connection refused
[...]

alecmerdler commented 6 years ago

I'm convinced that this is an issue with networking between minikube and Docker, because the `kubectl` commands execute successfully in the shell context, but the container started by `docker run` cannot connect to the cluster (`connect: connection refused`).
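
You can see the split with a quick check like this (IP taken from your output above; the container-side failure should mirror the console's):

# from the host shell -- connects (you may get a 401/403 back, but the TCP connection succeeds)
curl -k https://192.168.39.119:8443/version
# from a container on Docker's default bridge network -- connection refused, just like the console
docker run --rm alpine wget -qO- http://192.168.39.119:8443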

The solution is likely to add `--net=host` to the `docker run` command.
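
i.e. something along these lines (a sketch only -- the image and BRIDGE_* settings here are illustrative, not the exact script contents):

docker run --rm --net=host \
  -e BRIDGE_USER_AUTH="disabled" \
  -e BRIDGE_K8S_MODE="off-cluster" \
  -e BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT="https://192.168.39.119:8443" \
  -e BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS=true \
  quay.io/openshift/origin-console:latest

With `--net=host` the container shares the host's network namespace, so it reaches the minikube VM exactly like `kubectl` on the host does, and the console is still served at http://localhost:9000.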

djwhatle commented 6 years ago

Adding `--net=host` to the `docker run` command solved my issue. Looks like there's already a PR up for this.
