Closed letsjustfixit closed 7 years ago
The tool does not support Windows. Better said, I haven't tested it there, and it probably does not work, but I'm happy to help you if you want to test it.
You need to replace [profile] with a name:
E.g.
oc-cluster up test
Also, the home directory is not properly replaced ("/home/[user]").
I don't have windows so I can't test it ☹️
I've just replaced those placeholders myself :) Don't worry about that. It's -supposed to be- working the same way as on Linux. What caught my eye is that the file ${OPENSHIFT_HOST_CONFIG_DIR}/master/admin.kubeconfig really doesn't exist. For now I've created the folder and touched that file. It's also strange that the OpenShift console is listening on https://10.0.75.2:8443/console/ instead of https://127.0.0.1:8443/console/, even though the start command said it was going to bind on 127.0.0.1...
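For reference, this is the check I'm doing; it's a sketch that rebuilds the path the wrapper expects (the profile name and directory layout are taken from the trace below, so treat them as assumptions for other setups):

```shell
# Hypothetical check: reconstruct OPENSHIFT_HOST_CONFIG_DIR as the wrapper
# computes it and see whether `oc cluster up` actually wrote the admin
# kubeconfig there.
profile="upc1"                                   # profile name used in the trace
config_dir="$HOME/.oc/profiles/$profile/config"  # OPENSHIFT_HOST_CONFIG_DIR
kubeconfig="$config_dir/master/admin.kubeconfig"

if [ -f "$kubeconfig" ]; then
  echo "found: $kubeconfig"
else
  echo "missing: $kubeconfig"
fi
```

On Linux, `oc cluster up --host-config-dir ...` writes `master/admin.kubeconfig` into that directory on the host. With Docker for Windows the bind mount may end up inside the Docker VM instead, which could explain why the file never appears under the host home directory.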
I've added set -x so you can see how the variables are evaluated:
oc-cluster up upc1
+ DOC=https://github.com/openshift-evangelists/oc-cluster-wrapper/blob/master/README.adoc
++ basename /home/ackbar/.bin/oc-cluster-wrapper/oc-cluster
+ SCRIPT_NAME=oc-cluster
+ OC_BINARY=oc
+ SOURCE=/home/ackbar/.bin/oc-cluster-wrapper/oc-cluster
+ '[' -h /home/ackbar/.bin/oc-cluster-wrapper/oc-cluster ']'
+++ dirname /home/ackbar/.bin/oc-cluster-wrapper/oc-cluster
++ cd -P /home/ackbar/.bin/oc-cluster-wrapper
++ pwd
+ DIR=/home/ackbar/.bin/oc-cluster-wrapper
+ trap cleanupClusterAndExit SIGQUIT
+ __PLATFORM=unknown
++ uname
+ __UNAMESTR=Linux
+ [[ Linux == \L\i\n\u\x ]]
+ __PLATFORM=linux
+ [[ linux == \u\n\k\n\o\w\n ]]
++ oc version --request-timeout=1
++ grep oc
++ awk -F+ '{print $1}'
++ awk '{print $2}'
+ __VERSION=v1.5.1
+ '[' -z v1.5.1 ']'
+ oc cluster up --help
+ grep -- ' --image='
+ grep -q ose
+ __TYPE=origin
+ __IMAGE=openshift/origin
+ echo '# Using client for origin v1.5.1'
# Using client for origin v1.5.1
++ id -u
+ USER=1000
++ id -g
+ GROUP=1000
+ OPENSHIFT_HOME_DIR=/home/ackbar/.oc
+ OPENSHIFT_PROFILES_DIR=/home/ackbar/.oc/profiles
+ PLUGINS_DIR=/home/ackbar/.bin/oc-cluster-wrapper/plugins.d
+ OC_CLUSTER_PREFILL_PVS=10
++ date +%Z
+ OC_CLUSTER_TZ=DST
+ OC_CLUSTER_MOUNTS=/home/ackbar/.bin/oc-cluster-wrapper/mounts.json
+ M2_HOME=/home/ackbar/.m2
+ OS_DEFAULT_USER=developer
+ OS_DEFAULT_PROJECT=myproject
+ __RESTART=0
+ for plugin in '$DIR/plugins.d/*.global.plugin'
+ source /home/ackbar/.bin/oc-cluster-wrapper/plugins.d/prepull-images.global.plugin
+ for plugin in '$DIR/plugins.d/*.global.plugin'
+ source /home/ackbar/.bin/oc-cluster-wrapper/plugins.d/profilesnapshot.global.plugin
++ OPENSHIFT_SNAPSHOT_DIR=/home/ackbar/.oc/profiles/snapshots
+ for plugin in '$DIR/plugins.d/*.global.plugin'
+ source /home/ackbar/.bin/oc-cluster-wrapper/plugins.d/tag.global.plugin
+ for plugin in '$DIR/plugins.d/*.global.plugin'
+ source /home/ackbar/.bin/oc-cluster-wrapper/plugins.d/volumes.global.plugin
++ activeProfile
+++ '[' -f /home/ackbar/.oc/active_profile ']'
++ local _active_profile=
++ echo ''
+ '[' '' '!=' '' ']'
+ [[ 2 -gt 0 ]]
+ key=up
+ case $key in
+ shift
+ up upc1
+ '[' upc1 == -h ']'
+ '[' upc1 == --help ']'
+ local _profile=upc1
+ status
+ '[' upc1 == '' ']'
+ [[ upc1 == -* ]]
+ shift
+ '[' -e /home/ackbar/.bin/oc-cluster-wrapper/mounts-template.json ']'
+ '[' '!' -d /home/ackbar/.oc/profiles/upc1 ']'
+ echo '[INFO] Running a new cluster'
[INFO] Running a new cluster
+ local OPENSHIFT_HOST_DATA_DIR=/home/ackbar/.oc/profiles/upc1/data
+ local OPENSHIFT_HOST_CONFIG_DIR=/home/ackbar/.oc/profiles/upc1/config
+ local OPENSHIFT_HOST_VOLUMES_DIR=/home/ackbar/.oc/profiles/upc1/volumes
+ local OPENSHIFT_HOST_PLUGINS_DIR=/home/ackbar/.oc/profiles/upc1/plugins
+ local OPENSHIFT_HOST_PV_DIR=/home/ackbar/.oc/profiles/upc1/pv
+ mkdir -p /home/ackbar/.oc/profiles/upc1
+ mkdir -p /home/ackbar/.oc/profiles/upc1/config
+ mkdir -p /home/ackbar/.oc/profiles/upc1/data
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes
+ mkdir -p /home/ackbar/.oc/profiles/upc1/plugins
+ mkdir -p /home/ackbar/.oc/profiles/upc1/pv
+ '[' '' ']'
+ local 'CMDLINE=oc cluster up'
+ '[' '!' -z '' ']'
+ CMDLINE+=' --version v1.5.1'
+ '[' '!' -z '' ']'
+ '[' '!' -z ']'
+ CMDLINE+=' --image openshift/origin'
+ '[' '!' -z ']'
+ OC_CLUSTER_PUBLIC_HOSTNAME=127.0.0.1
+ CMDLINE+=' --public-hostname 127.0.0.1'
+ '[' '!' -z ']'
+ OC_CLUSTER_ROUTING_SUFFIX=apps.127.0.0.1.nip.io
+ CMDLINE+=' --routing-suffix apps.127.0.0.1.nip.io'
+ '[' '!' -z ']'
+ '[' '!' -z ']'
+ CMDLINE+=' --host-data-dir /home/ackbar/.oc/profiles/upc1/data'
+ CMDLINE+=' --host-config-dir /home/ackbar/.oc/profiles/upc1/config'
+ supports pv
+ '[' pv == pv ']'
++ oc cluster up -h
++ grep host-pv-dir
++ wc -l
+ ret=1
+ '[' 1 -eq 0 ']'
+ return 0
+ CMDLINE+=' --host-pv-dir /home/ackbar/.oc/profiles/upc1/pv'
+ CMDLINE+=' --use-existing-config'
+ CMDLINE+=' -e TZ=DST'
+ echo apps.127.0.0.1.nip.io
+ echo 127.0.0.1
+ echo ''
+ echo 'oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /home/ackbar/.oc/profiles/upc1/data --host-config-dir /home/ackbar/.oc/profiles/upc1/config --host-pv-dir /home/ackbar/.oc/profiles/upc1/pv --use-existing-config -e TZ=DST'
+ echo 'oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /home/ackbar/.oc/profiles/upc1/data --host-config-dir /home/ackbar/.oc/profiles/upc1/config --host-pv-dir /home/ackbar/.oc/profiles/upc1/pv --use-existing-config -e TZ=DST'
oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /home/ackbar/.oc/profiles/upc1/data --host-config-dir /home/ackbar/.oc/profiles/upc1/config --host-pv-dir /home/ackbar/.oc/profiles/upc1/pv --use-existing-config -e TZ=DST
+ '[' '!' -z ']'
+ eval 'oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /home/ackbar/.oc/profiles/upc1/data --host-config-dir /home/ackbar/.oc/profiles/upc1/config --host-pv-dir /home/ackbar/.oc/profiles/upc1/pv --use-existing-config -e TZ=DST'
++ oc cluster up --version v1.5.1 --image openshift/origin --public-hostname 127.0.0.1 --routing-suffix apps.127.0.0.1.nip.io --host-data-dir /home/ackbar/.oc/profiles/upc1/data --host-config-dir /home/ackbar/.oc/profiles/upc1/config --host-pv-dir /home/ackbar/.oc/profiles/upc1/pv --use-existing-config -e TZ=DST
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ...
WARNING: Cannot verify Docker version
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.5.1 image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ...
Using 10.0.75.2 as the server IP
-- Starting OpenShift container ...
Creating initial OpenShift configuration
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Checking container networking ... OK
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://127.0.0.1:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin
+ status
+ echo upc1
+ [[ linux == \l\i\n\u\x ]]
++ internalProfileDir
++ echo /var/lib/origin/openshift.local.config
+ docker exec origin chown -R 1000:1000 /var/lib/origin/openshift.local.config
+ echo '-- Permissions on profile dir fixed'
-- Permissions on profile dir fixed
++ echo 127.0.0.1
++ tr -s . -
+ CONTEXT=default/127-0-0-1:8443/system:admin
+ oc adm policy add-cluster-role-to-group sudoer system:authenticated --config=/home/ackbar/.oc/profiles/upc1/config/master/admin.kubeconfig --context=default/127-0-0-1:8443/system:admin
error: stat /home/ackbar/.oc/profiles/upc1/config/master/admin.kubeconfig: no such file or directory
See 'oc adm policy add-cluster-role-to-group -h' for help and examples.
+ echo '-- Any user is sudoer. They can execute commands with '\''--as=system:admin'\'''
-- Any user is sudoer. They can execute commands with '--as=system:admin'
++ seq -f %02g 1 10
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol01
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol01
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol01
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol01 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol01
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol01
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol01 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol01
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol01
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol01'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol01
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol01'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol02
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol02
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol02
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol02 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol02
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol02
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol02 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol02
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol02
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol02'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol02
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol02'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol03
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol03
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol03
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol03 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol03
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol03
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol03 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol03
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol03
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol03'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol03
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol03'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol04
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol04
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol04
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol04 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol04
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol04
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol04 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol04
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol04
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol04'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol04
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol04'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol05
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol05
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol05
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol05 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol05
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol05
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol05 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol05
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol05
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol05'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol05
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol05'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol06
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol06
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol06
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol06 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol06
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol06
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol06 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol06
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol06
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol06'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol06
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol06'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol07
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol07
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol07
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol07 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol07
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol07
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol07 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol07
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol07
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol07'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol07
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol07'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol08
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol08
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol08
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol08 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol08
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol08
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol08 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol08
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol08
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol08'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol08
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol08'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol09
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol09
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol09
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol09 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol09
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol09
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol09 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol09
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol09
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol09'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol09
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol09'
+ for i in '$(seq -f %02g 1 $OC_CLUSTER_PREFILL_PVS)'
+ create-volume vol10
+ '[' 1 -lt 1 ']'
++ cat /home/ackbar/.oc/active_profile
+ local __profile=upc1
+ local __volume=vol10
+ local __size=10Gi
+ local __path=/home/ackbar/.oc/profiles/upc1/volumes/vol10
+ [[ ! 10Gi =~ ^[[:digit:]]+[GM]i$ ]]
+ oc get persistentvolume vol10 --as=system:admin
+ internal.setup_pv_dir /home/ackbar/.oc/profiles/upc1/volumes/vol10
+ local dir=/home/ackbar/.oc/profiles/upc1/volumes/vol10
+ [[ ! -d /home/ackbar/.oc/profiles/upc1/volumes/vol10 ]]
+ mkdir -p /home/ackbar/.oc/profiles/upc1/volumes/vol10
+ [[ linux == \l\i\n\u\x ]]
+ chcon -t svirt_sandbox_file_t /home/ackbar/.oc/profiles/upc1/volumes/vol10
+ echo 'Not setting SELinux content for /home/ackbar/.oc/profiles/upc1/volumes/vol10'
+ chmod 777 /home/ackbar/.oc/profiles/upc1/volumes/vol10
+ cat
+ oc create -f /tmp/pv.yaml --as=system:admin
Error from server (Forbidden): error when creating "/tmp/pv.yaml": User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ rm /tmp/pv.yaml
+ echo 'Volume created in /home/ackbar/.oc/profiles/upc1/volumes/vol10'
+ echo '-- 10 Persistent Volumes are available for use'
-- 10 Persistent Volumes are available for use
+ oc adm policy add-role-to-user view system:serviceaccount:openshift-infra:hawkular -n openshift-infra --as=system:admin
Error from server (Forbidden): User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ oc adm policy add-cluster-role-to-user cluster-admin admin --as=system:admin
Error from server (Forbidden): User "developer" cannot "impersonate" "systemusers" with name "system:admin" in project ""
+ echo '-- User admin has been set as cluster administrator'
-- User admin has been set as cluster administrator
+ oc adm config set-cluster upc1 --server=https://127.0.0.1:8443 --insecure-skip-tls-verify=true
++ oc whoami -t
+ oc adm config set-credentials developer/upc1 --token=x1JyYDuRvvx8P0Grf8XF0bRhzmmeiJeguuqEWzt6Hlk
+ oc adm config set-context upc1 --cluster=upc1 --user=developer/upc1 --namespace=myproject
+ oc adm config use-context upc1
Switched to context "upc1".
+ echo '-- Adding an oc-profile=upc1 label to every generated image so they can be later removed'
-- Adding an oc-profile=upc1 label to every generated image so they can be later removed
+ configPatch '{\"admissionConfig\": {\"pluginConfig\": {\"BuildDefaults\": {\"configuration\": {\"apiVersion\": \"v1\",\"kind\": \"BuildDefaultsConfig\",\"imageLabels\": [{\"name\": \"oc-profile\",\"value\": \"upc1\"}]}}}}}'
+ echo 'Applying this path to the config: [{\"admissionConfig\": {\"pluginConfig\": {\"BuildDefaults\": {\"configuration\": {\"apiVersion\": \"v1\",\"kind\": \"BuildDefaultsConfig\",\"imageLabels\": [{\"name\": \"oc-profile\",\"value\": \"upc1\"}]}}}}}]'
+ cat
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
++ internalMasterConfigDir
++ echo /var/lib/origin/openshift.local.config/master
+ chmod 755 /tmp/patch_00001
+ docker cp /tmp/patch_00001 origin:/tmp/patch_00001
+ docker exec -t origin /usr/bin/bash /tmp/patch_00001
+ rm /tmp/patch_00001
+ markRestart
+ __RESTART=1
+ echo '[INFO] Cluster created succesfully'
[INFO] Cluster created succesfully
+ '[' 1 -eq 1 ']'
+ forceRestart
+ echo -n 'Restarting openshift. '
Restarting openshift. + docker stop origin
+ docker start origin
+ echo Done
Done
+ __RESTART=0
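To read the trace above: because `admin.kubeconfig` was missing, the `add-cluster-role-to-group sudoer` step failed, so the wrapper stayed logged in as `developer`, and every later `--as=system:admin` call was rejected with the "cannot impersonate" error. The kubeconfig context name it passes to that command is derived from the public hostname; a minimal sketch of the derivation (names taken from the trace):

```shell
# The wrapper builds the kubeconfig context name from the public hostname:
# dots are translated to dashes (see "+ CONTEXT=..." in the trace above).
public_hostname="127.0.0.1"
context="default/$(echo "$public_hostname" | tr -s . -):8443/system:admin"
echo "$context"   # default/127-0-0-1:8443/system:admin

# Once admin.kubeconfig actually exists on the host, the failed grant could be
# retried by hand (same command the wrapper runs):
#   oc adm policy add-cluster-role-to-group sudoer system:authenticated \
#     --config="$HOME/.oc/profiles/upc1/config/master/admin.kubeconfig" \
#     --context="$context"
```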
Well, I've managed to log in with developer:developer after replacing 127.0.0.1 everywhere with 10.0.75.2, but I'm still receiving the same errors 😭
@jorgemoralespou Any ideas on this?
Since it'll take me some time to get a Windows box up and running, I recommend using https://github.com/getwarped/powershift-cli.
It should be 99% close to this tool, as it was developed by a colleague, mainly to support Windows.
CC/ @grahamdumpleton
There are a lot of messy fiddles required to do things on Windows. The way oc-cluster-wrapper worked, it used the host file system, and doing that on Windows causes various problems. To work properly on Windows, I ended up moving things that were under the home directory of the host filesystem into the VM filesystem. I'm not sure that changing oc-cluster-wrapper to do the same would be worth it, so I suggest giving powershift-cli a go instead.
Thanks for the suggestion, will do so :)
Feel free to close this issue, I've managed to move to powershift :)
Just be aware that v3.6.0-alpha.2 doesn't appear to work with powershift. Right now I know this affects Mac OS X, but I don't know about Windows or Linux. The web console works and you can deploy applications, but the cluster is not accepting connections on ports 80/443, so routing to hosted applications doesn't work. If you use plain oc cluster up it works okay, so something has been changed/broken when using saved configuration and/or the --public-hostname option.
I've just tried to use this wrapper script on Windows 10 with Bash and Docker for Windows, but the output below was the result of oc-cluster up [profile]. Could you please point out what I might have missed? Thanks very much!