Closed — alberttwong closed this issue 8 years ago
@alberttwong what's the issue you're hitting as the console output above looks like it's installed correctly?
@rawlingsj isn't there a UI like in the videos? It used to be at fabric8.cdk.vm
So... here's more... it seems like the service isn't starting.
```
[vagrant@rhel-cdk ~]$ oc get pods
NAME            READY     STATUS    RESTARTS   AGE
fabric8-bkgc7   0/1       Pending   0          12m
[vagrant@rhel-cdk ~]$ oc logs fabric8-bkgc7
Error from server: Internal error occurred: Pod "fabric8-bkgc7" in namespace "sample-project" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
```
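That error is expected for a Pending pod: `oc logs` has nothing to show until a container has actually started. One option is to poll the pod's phase and only ask for logs once it leaves Pending. A minimal sketch of that loop — `get_phase` here is a hypothetical stub standing in for the real `oc get pod "$pod" -o template --template='{{.status.phase}}'` call, so the loop can be demonstrated without a cluster:

```shell
# Poll a pod's phase until it is no longer "Pending" (sketch).
# get_phase is a stub: it pretends the pod stays Pending for the first
# two polls and is Running on the third. In a real session, replace it
# with the `oc get pod ... --template` call mentioned above.
get_phase() {
  if [ "$2" -ge 3 ]; then echo "Running"; else echo "Pending"; fi
}

pod="fabric8-bkgc7"
for attempt in 1 2 3 4 5; do
  phase=$(get_phase "$pod" "$attempt")
  echo "attempt $attempt: $pod is $phase"
  if [ "$phase" != "Pending" ]; then break; fi
  # sleep 5   # against a real cluster, pause between polls
done
```

Once the loop exits with a non-Pending phase, `oc logs "$pod"` should return output (or a useful failure message).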
It seems like it's stuck... let me try to kill it.
There is something wrong, but I don't know enough to work out how to get more debugging info on the issue.
Does `oc describe pod fabric8-bkgc7` or `oc describe rc fabric8` give any clue why the pod's not starting?
Restarted with `vagrant halt`, then `vagrant up` and `vagrant provision`.
```
[vagrant@rhel-cdk ~]$ oc describe pod fabric8-lc588
Name:                     fabric8-lc588
Namespace:                sample-project
Image(s):                 fabric8/fabric8-console:2.2.130
Node:                     rhel-cdk/10.0.2.15
Start Time:               Wed, 11 May 2016 03:56:34 -0400
Labels:                   group=io.fabric8.apps,project=console,provider=fabric8,version=2.2.130
Status:                   Running
Reason:
Message:
IP:                       172.17.0.2
Replication Controllers:  fabric8 (1/1 replicas created)
Containers:
  fabric8-container:
    Container ID:    docker://79432828b804606dbb5c1f9fff3e371743357b886f5db94895b8cfbae5b80d28
    Image:           fabric8/fabric8-console:2.2.130
    Image ID:        docker://cb332a0944585b04cfb5281deb325aa0d55d8b1af9aa40a917de19164c59a60b
    QoS Tier:
      cpu:           BestEffort
      memory:        BestEffort
    State:           Running
      Started:       Wed, 11 May 2016 03:57:28 -0400
    Ready:           True
    Restart Count:   0
    Environment Variables:
      GOOGLE_OAUTH_SCOPE:               profile
      OAUTH_AUTHORIZE_PORT:             8443
      GOOGLE_OAUTH_CLIENT_ID:
      OAUTH_AUTHORIZE_URI:              https://rhel-cdk.10.1.2.2.xip.io:8443/oauth/authorize
      GOOGLE_OAUTH_AUTHENTICATION_URI:  https://accounts.google.com/o/oauth2/auth
      GOOGLE_OAUTH_CLIENT_SECRET:
      OAUTH_CLIENT_ID:                  fabric8
      OAUTH_PROVIDER:                   openshift
      GOOGLE_OAUTH_REDIRECT_URI:        https://fabric8.rhel-cdk.10.1.2.2.xip.io
      GOOGLE_OAUTH_TOKEN_URL:           https://www.googleapis.com/oauth2/v3/token
      KUBERNETES_NAMESPACE:             sample-project (v1:metadata.namespace)
Conditions:
  Type    Status
  Ready   True
Volumes:
  default-token-edcsw:
    Type:        Secret (a secret that should populate this volume)
    SecretName:  default-token-edcsw
Events:
  FirstSeen  LastSeen  Count  From                SubobjectPath                        Reason     Message
  ─────────  ────────  ─────  ────                ─────────────                        ──────     ───────
  7m   7m   1   {kubelet rhel-cdk}  implicitly required container POD    Pulled     Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
  7m   7m   1   {scheduler }                                             Scheduled  Successfully assigned fabric8-lc588 to rhel-cdk
  7m   7m   1   {kubelet rhel-cdk}  implicitly required container POD    Created    Created with docker id bcfda9ae1be9
  7m   7m   1   {kubelet rhel-cdk}  implicitly required container POD    Started    Started with docker id bcfda9ae1be9
  1m   1m   1   {kubelet rhel-cdk}  implicitly required container POD    Pulled     Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
  1m   1m   1   {kubelet rhel-cdk}  implicitly required container POD    Created    Created with docker id 7f4d66b98d98
  1m   1m   1   {kubelet rhel-cdk}  spec.containers{fabric8-container}   Pulling    pulling image "fabric8/fabric8-console:2.2.130"
  1m   1m   1   {kubelet rhel-cdk}  implicitly required container POD    Started    Started with docker id 7f4d66b98d98
  1m   1m   1   {kubelet rhel-cdk}  implicitly required container POD    Pulled     Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
  1m   1m   1   {kubelet rhel-cdk}  implicitly required container POD    Created    Created with docker id f7669e61aaea
  1m   1m   1   {kubelet rhel-cdk}  implicitly required container POD    Started    Started with docker id f7669e61aaea
  1m   1m   1   {kubelet rhel-cdk}  spec.containers{fabric8-container}   Pulling    pulling image "fabric8/fabric8-console:2.2.130"
  10s  10s  1   {kubelet rhel-cdk}  spec.containers{fabric8-container}   Pulled     Successfully pulled image "fabric8/fabric8-console:2.2.130"
  9s   9s   1   {kubelet rhel-cdk}  spec.containers{fabric8-container}   Created    Created with docker id 79432828b804
  9s   9s   1   {kubelet rhel-cdk}  spec.containers{fabric8-container}   Started    Started with docker id 79432828b804
[vagrant@rhel-cdk ~]$ oc describe rc fabric8
Name:         fabric8
Namespace:    sample-project
Image(s):     fabric8/fabric8-console:2.2.130
Selector:     group=io.fabric8.apps,project=console,provider=fabric8,version=2.2.130
Labels:       group=io.fabric8.apps,project=console,provider=fabric8,version=2.2.130
Replicas:     1 current / 1 desired
Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen  LastSeen  Count  From                        SubobjectPath  Reason             Message
  ─────────  ────────  ─────  ────                        ─────────────  ──────             ───────
  21m  21m  1   {replication-controller }                  SuccessfulCreate  Created pod: fabric8-bkgc7
  8m   8m   1   {replication-controller }                  SuccessfulCreate  Created pod: fabric8-lc588
  8m   8m   1   {replication-controller }                  SuccessfulCreate  Created pod: fabric8-56v43
```
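The events table above tells the story: the infrastructure (POD) container was created and started several times with different docker ids while `fabric8/fabric8-console:2.2.130` was still `Pulling`, and the pull only succeeded (`Pulled`) seconds before the describe ran. A quick way to confirm a slow pull from a saved describe dump is to count the pull events; the sample lines below are trimmed by hand from the events output above:

```shell
# Count image-pull events for the console image in a saved describe dump
# (sketch; the lines are abbreviated from the events table above).
events='1m  Pulling pulling image "fabric8/fabric8-console:2.2.130"
1m  Pulling pulling image "fabric8/fabric8-console:2.2.130"
10s Pulled  Successfully pulled image "fabric8/fabric8-console:2.2.130"'

pulling=$(printf '%s\n' "$events" | grep -c 'Pulling')
pulled=$(printf '%s\n' "$events" | grep -c 'Successfully pulled')
echo "pull attempts: $pulling, completed: $pulled"
```

Several `Pulling` events with only a recent `Pulled` is the signature of a pod that was simply waiting on a large image download rather than failing outright.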
So that looks like the pod started ok?
yeah.. that was weird.... I guess a restart worked... thanks!
It might have been that it took a while to pull the fabric8 console docker image, glad it's sorted.
Trying to run fabric8 on Red Hat Container Development Kit 2.0.
Expecting fabric8 console at http://fabric8.rhel-cdk.10.1.2.2.xip.io/ or https://fabric8.rhel-cdk.10.1.2.2.xip.io/ but it doesn't seem to work.
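For the record, the `fabric8.rhel-cdk.10.1.2.2.xip.io` hostname relies on xip.io wildcard DNS: any name with an IPv4 address embedded before `.xip.io` resolves to that address, so the request lands on the OpenShift router in the CDK VM at 10.1.2.2 and is matched to the fabric8 route by Host header. A sketch of that hostname-to-IP mapping as pure string parsing (no DNS lookup, since the xip.io service itself may be unavailable):

```shell
# xip.io-style wildcard DNS resolves <anything>.<ip>.xip.io to the
# embedded <ip>. Extract it with sed to show the mapping directly.
host="fabric8.rhel-cdk.10.1.2.2.xip.io"
ip=$(printf '%s\n' "$host" \
  | sed -n 's/.*\.\([0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}\)\.xip\.io$/\1/p')
echo "requests to $host reach the router at $ip"
```

So if the console URL doesn't respond, the first things to check are that 10.1.2.2 is reachable from the host and that DNS for `*.xip.io` names isn't being blocked locally.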