warmchang opened this issue 7 years ago
After checking the script, I found an inconsistency between "/root/.oc/profiles/" and "/root/.kube/config":
[root@appab-myproject ~]# ll /root/.oc/profiles/
total 0
drwxr-xr-x. 7 root root 124 Jun 30 12:40 example
drwxr-xr-x. 7 root root 124 Jun 30 13:28 workshop
drwxr-xr-x. 7 root root 124 Jun 30 13:50 workshop2
However, /root/.kube/config has no entry for workshop2.
This is caused by a forced shutdown before "oc-cluster up workshop2" finished executing. Deleting the workshop2 folder and rerunning oc-cluster up makes everything work again.
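For reference, the mismatch can be detected with a small check like the one below. This is only a sketch: the function name is mine, and it assumes the wrapper names each kubeconfig context after its profile (as the error message here suggests).

```shell
# Sketch: report profile folders that have no matching context
# entry in the kubeconfig file. Names and paths are illustrative.
find_stale_profiles() {
  profile_dir=$1
  kube_config=$2
  for dir in "$profile_dir"/*/; do
    [ -d "$dir" ] || continue
    name=$(basename "$dir")
    # A context entry in kubeconfig contains a line "name: <profile>".
    if ! grep -q "name: $name$" "$kube_config" 2>/dev/null; then
      echo "stale profile: $name"
    fi
  done
}

# Example with the paths from this issue:
find_stale_profiles /root/.oc/profiles /root/.kube/config
```

In the situation above this would flag workshop2, since its folder exists but no matching context was ever written.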
Rerunning oc-cluster up in the above condition shows the following error:
[root@appab-myproject ~]# oc-cluster up workshop2
# Using client for origin v1.5.1
[INFO] Running a previously created cluster
oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /root/.oc/profiles/workshop2/data --host-config-dir /root/.oc/profiles/workshop2/config --host-pv-dir /root/.oc/profiles/workshop2/pv --use-existing-config -e TZ=CST
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
WARNING: Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients.
-- Checking type of volume mount ...
Using nsenter mounter for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
Using 192.168.10.130 as the server IP
-- Starting OpenShift container ...
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Removing temporary directory ... OK
-- Checking container networking ... OK
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://127.0.0.1:8443
To login as administrator:
oc login -u system:admin
-- Permissions on profile dir fixed
error: no context exists with the name: "workshop2".
[root@appab-myproject ~]# oc-cluster list
Could we fix the error above automatically in the script? For example, by adjusting the order of the two commands: run "${OC_BINARY} adm config use-context xxx" first, and check that the xxx context exists.
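As a rough sketch of that check (OC_BINARY and the function names here stand in for whatever the wrapper script actually uses; this is not the script's current code):

```shell
# Hypothetical pre-check: verify the context exists before the
# wrapper reports the cluster as running.
context_exists() {
  # Exit 0 only when the named context is present in the kubeconfig.
  "${OC_BINARY:-oc}" config get-contexts "$1" >/dev/null 2>&1
}

ensure_context() {
  if context_exists "$1"; then
    "${OC_BINARY:-oc}" config use-context "$1"
  else
    echo "context '$1' not found; the profile dir may be stale" >&2
    return 1
  fi
}
```

With this ordering the script could detect the broken state up front instead of failing at the end with "no context exists with the name".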
Or we could do it like this: if the xxx context is broken but the folder "/root/.oc/profiles/xxx" exists, we could back it up (to /root/.oc/profiles/xxx_backup) and recreate xxx, just like creating a new profile.
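That backup step could look something like this (again only a sketch; the helper name is mine):

```shell
# Hypothetical recovery helper: if a profile folder exists on disk,
# move it aside so "oc-cluster up" can recreate the profile cleanly.
backup_broken_profile() {
  profile_dir=$1               # e.g. /root/.oc/profiles/workshop2
  if [ -d "$profile_dir" ]; then
    mv "$profile_dir" "${profile_dir}_backup"
    echo "moved $profile_dir to ${profile_dir}_backup"
  fi
}
```

The backup keeps the data and pv directories around in case anything from the half-created profile needs to be salvaged.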
What do you think? I hope to hear feedback from the contributors. Thanks.
Running oc-cluster destroy failed with "Permission denied":
The docker version & OS version:
Any advice on how to resolve this? Thanks.