edmacabebe opened 6 years ago
I'm using minishift with the latest origin 3.9. May I know if this is working?
I've set MINISHIFT_ENABLE_EXPERIMENTAL=y. My config is as follows:

    minishift profile set local-cluster-ac
    minishift config set iso-url centos
    minishift config set vm-driver virtualbox
    minishift config set memory 12GB
    minishift config set cpus 3
    minishift config set disk-size 100GB
    minishift config set openshift-version 3.9.0
    minishift config set metrics true
    minishift config set logging true
    minishift config set host-config-dir ~/config
    minishift config set host-volumes-dir ~/volumes
    minishift config set host-data-dir ~/data
    minishift config set host-pv-dir ~/pv
I boot up the cluster in this manner:

    minishift start --profile local-cluster-abc --service-catalog
After getting past the above with a fresh cluster name, applying only the following default addons — anyuid, admin-user and registry-route — seemed to work smoothly. But when I finally decided to go ahead and apply the asb addon:

    minishift addon apply ansible-service-broker

the deployment of the ASB fails with a CrashLoopBackOff. I'll try redeploying, increasing the timeout from 600 to 1500.
Further, more drastic changes had to be made: redeploying with a modified Recreate strategy, increasing the timeout five-fold, and tagging it to release-1.1. The APB is loading, but due to volume it's taking time... we'll see.
@edmacabebe Which version of minishift are you using?
Mostly 3.9, but I do switch back to 3.7.2 or 1 on various profiles when I encounter blockers.
My last activity, as posted here, ultimately corrupted my entire minishift setup on macOS: every time I boot it up, it now rejects all of the versions that I want to set. I'm reinstalling now and trying out the default minishift profile using 3.7.2. I hope to identify outright which of the addons are working. So far, among the large ones, I've consistently been able to install mini-che successfully. Ansible-service-broker, Cockpit, istio, & fabric8 have so far been very painful.
> I'm using minishift for latest origin 3.9
I got it from here that you are using minishift with origin 3.9. The ASB addon is tested with 3.7.0.
Probably @eriknelson can help here with what is wrong in 3.9.
> mini-che. Ansible-service-broker, Cockpit, istio, & fabric8 have so far been very painful.
Fabric8 might not be stable now, as the original authors are not maintaining it.
The ASB works on 3.7.2. Now on with APB, finally!
I can still wait for a really solid 3.9 cluster up.
Thanks bud!
@edmacabebe Renaming the issue to reflect that APB doesn't work with 3.9.
Hi! Will investigate this today and report back with 3.9 details. Thanks for letting me know @budhrg.
@edmacabebe Using minishift v1.14.0+1ec5877, I didn't have any issues installing and applying the ansible-service-broker addon as of master. Can you update with your minishift version, as well as your oc version outputs, after you have a 3.9 minishift cluster up and running? From your error, it looks like the context is misconfigured such that your oc tool can't even talk to the cluster, due to a corrupted x509 cert. I would also ask: before you apply ansible-service-broker, can you create a new project manually (oc new-project foo)?
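The sanity checks requested above can be sketched as a short script. This is only a sketch: "foo" is a throwaway project name, and the guard clause is mine, added so the script degrades gracefully when no cluster is reachable.

```shell
# Sketch of the requested sanity checks; 'foo' is a throwaway project name.
# Guarded so nothing runs against a cluster that isn't there.
if command -v minishift >/dev/null 2>&1 \
    && command -v oc >/dev/null 2>&1 \
    && oc whoami >/dev/null 2>&1; then
  minishift version          # which minishift build is in use
  oc version                 # client and server versions
  oc new-project foo         # can a project be created at all?
  oc delete project foo      # clean up the throwaway project
  CLUSTER_OK=yes
else
  echo "no reachable cluster (or tools not on PATH); bring minishift up first"
  CLUSTER_OK=no
fi
```

If `oc new-project foo` already fails here, the x509/context problem is independent of the ansible-service-broker addon.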
FWIW, I've submitted a couple issues while debugging this:
I'm using minishift v1.15.1+a5c47dd
I initialize minishift by calling:

    minishift delete -f && rm -rf ~/.minishift/profiles/local-cluster-aa
My new profile and configs are the following:
    minishift profile set local-cluster-aa
    minishift config set iso-url centos
    minishift config set vm-driver virtualbox
    minishift config set memory 12GB
    minishift config set cpus 3
    minishift config set disk-size 100GB
    minishift config set openshift-version 3.9.0
    minishift config set metrics true
    minishift config set logging true
    minishift config set host-config-dir ~/Dev/single-master-local-cluster/local-cluster-aa/config
    minishift config set host-volumes-dir ~/Dev/single-master-local-cluster/local-cluster-aa/volumes
    minishift config set host-data-dir ~/Dev/single-master-local-cluster/local-cluster-aa/data
    minishift config set host-pv-dir ~/Dev/single-master-local-cluster/local-cluster-aa/pv
Then I ran the following cluster-up scripts:
    source ~/.bashrc    # ensures MINISHIFT_ENABLE_EXPERIMENTAL=y is set
    minishift start --profile local-cluster-aa --service-catalog
    minishift addon apply anyuid admin-user registry-route    # apply the default addons
    eval $(minishift docker-env) && eval $(minishift oc-env)
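Since --service-catalog is only honored when the experimental flag is exported, a quick guard can confirm it before starting. This is a sketch; the echoed messages are mine:

```shell
# Verify the experimental flag is actually exported before 'minishift start';
# the --service-catalog flag is rejected without it.
if [ "${MINISHIFT_ENABLE_EXPERIMENTAL:-}" = "y" ]; then
  echo "experimental flags enabled"
  EXP_OK=yes
else
  echo "MINISHIFT_ENABLE_EXPERIMENTAL is not set to 'y'; check ~/.bashrc"
  EXP_OK=no
fi
```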
The end result is:

    -- Starting profile 'local-cluster-aa'
    -- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
    -- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
    -- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
    -- Checking if VirtualBox is installed ... OK
    -- Checking the ISO URL ... OK
    -- Checking if provided oc flags are supported ... OK
    -- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
    -- Minishift VM will be configured with ...
       Memory:    12 GB
       vCPUs :    3
       Disk size: 100 GB
    -- Starting Minishift VM ........................................ OK
    -- Checking for IP address ... OK
    -- Checking for nameservers ... OK
    -- Checking if external host is reachable from the Minishift VM ...
       Pinging 8.8.8.8 ... FAIL
       VM is unable to ping external host
    -- Checking HTTP connectivity from the VM ...
       Retrieving http://minishift.io/index.html ... OK
    -- Checking if persistent storage volume is mounted ... OK
    -- Checking available disk space ... 1% used OK
       Importing 'openshift/origin:v3.9.0' . CACHE MISS
       Importing 'openshift/origin-docker-registry:v3.9.0' . CACHE MISS
       Importing 'openshift/origin-haproxy-router:v3.9.0' . CACHE MISS
    -- OpenShift cluster will be configured with ...
       Version: v3.9.0
       Pulling image openshift/origin:v3.9.0
       Pulled 1/4 layers, 26% complete
       Pulled 1/4 layers, 34% complete
       Pulled 1/4 layers, 41% complete
       Pulled 1/4 layers, 49% complete
       Pulled 1/4 layers, 58% complete
       Pulled 1/4 layers, 70% complete
       Pulled 2/4 layers, 79% complete
       Pulled 3/4 layers, 87% complete
       Pulled 3/4 layers, 91% complete
       Pulled 3/4 layers, 96% complete
       Pulled 4/4 layers, 100% complete
       Extracting
       Image pull complete
    Using nsenter mounter for OpenShift volumes
    Using 192.168.99.100 as the server IP
    Starting OpenShift using openshift/origin:v3.9.0 ...
    OpenShift server started.
    The server is accessible via web console at:
        https://192.168.99.101:8443

    The metrics service is available at:
        https://hawkular-metrics-openshift-infra.192.168.99.101.nip.io/hawkular/metrics

    The kibana logging UI is available at:
        https://kibana-logging.192.168.99.101.nip.io

    You are logged in as:
        User:     developer
        Password:

    To login as administrator:
        oc login -u system:admin
    -- Exporting of OpenShift images is occuring in background process with pid 8294.
    -- Applying addon 'anyuid':.
       Add-on 'anyuid' changed the default security context constraints to allow pods to run as any user.
       Per default OpenShift runs containers using an arbitrarily assigned user ID.
       Refer to https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#security-context-constraints
       and https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines
       for more information.
    -- Applying addon 'admin-user':..
    -- Applying addon 'registry-route':........
       Add-on 'registry-route' created docker-registry route.
       Please run following commands to login to the OpenShift docker registry:
       $ eval $(minishift docker-env)
       $ eval $(minishift oc-env)
       If your deployed version of OpenShift is < 3.7.0 use:
       $ docker login -u developer -p $(oc whoami -t) docker-registry-default.192.168.99.101.nip.io:443
       If your deployed version of OpenShift is >= 3.7.0 use:
       $ docker login -u developer -p $(oc whoami -t) docker-registry-default.192.168.99.101.nip.io

    Installed addons
       Addon 'admin-role' installed
       Addon 'ansible-service-broker' installed
       Addon 'cockpit' installed
       Addon 'prometheus' installed
       Addon 'fabric8' installed
       Addon 'grafana' installed
       Addon 'kube-dashboard' installed
       Addon 'debezium' installed
       Addon 'management-infra' installed
       Addon 'che' installed

    Enable addons
       Add-on 'anyuid' enabled
       Add-on 'admin-user' enabled
       Add-on 'registry-route' enabled
       Add-on 'admin-role' enabled
       Add-on 'che' enabled
       Add-on 'fabric8' enabled
       Add-on 'prometheus' enabled
       Add-on 'grafana' enabled
       Add-on 'management-infra' enabled
       Add-on 'ansible-service-broker' enabled
       Add-on 'cockpit' enabled
Then I finally invoked:

    minishift addon apply ansible-service-broker
    -- Applying addon 'ansible-service-broker':.........
       Ansible Service Broker Deployed
       User [ developer ] has been given permission for usage with the apb tools
All seemed normal in the logs & events, with a rolling deployment... until this showed up in the events:

    Unhealthy  Readiness probe failed: Get https://172.17.0.9:1338/healthz: dial tcp 172.17.0.9:1338: getsockopt: connection refused

My 3.7.2 results for the same set of scripts came out all right, but not for 3.9.0.
The logs that I got from OpenShift are these:
    ============================================================
    ==           Starting Ansible Service Broker...           ==
    ============================================================
    [2018-04-16T11:15:59.513Z] [NOTICE] - Initializing clients...
    [2018-04-16T11:15:59.513Z] [DEBUG] - Trying to connect to etcd
    time="2018-04-16T11:15:59Z" level=info msg="== ETCD CX =="
    time="2018-04-16T11:15:59Z" level=info msg="EtcdHost: asb-etcd.ansible-service-broker.svc"
    time="2018-04-16T11:15:59Z" level=info msg="EtcdPort: 2379"
    time="2018-04-16T11:15:59Z" level=info msg="Endpoints: [https://asb-etcd.ansible-service-broker.svc:2379]"
    [2018-04-16T11:15:59.526Z] [INFO] - Etcd Version [Server: 3.3.3, Cluster: 3.3.0]
    [2018-04-16T11:15:59.527Z] [DEBUG] - Connecting to Cluster
    time="2018-04-16T11:15:59Z" level=info msg="OpenShift version: %vv3.9.0+f0a99e5-2"
    time="2018-04-16T11:15:59Z" level=info msg="unable to retrieve the network plugin, defaulting to not joining networks - clusternetworks.network.openshift.io \"default\" not found"
    time="2018-04-16T11:15:59Z" level=info msg="Kubernetes version: %vv1.9.1+a0ce1bc657"
    [2018-04-16T11:15:59.539Z] [DEBUG] - Connecting Dao
    [2018-04-16T11:15:59.539Z] [DEBUG] - Connecting Registry
    [2018-04-16T11:15:59.54Z] [DEBUG] - Initializing WorkEngine
    [2018-04-16T11:15:59.54Z] [DEBUG] - Creating AnsibleBroker
    [2018-04-16T11:15:59.54Z] [INFO] - Initiating Recovery Process
    [2018-04-16T11:15:59.54Z] [DEBUG] - Dao::FindByState
    [2018-04-16T11:15:59.54Z] [INFO] - No jobs to recover
    [2018-04-16T11:15:59.54Z] [NOTICE] -
    [2018-04-16T11:15:59.54Z] [INFO] - Broker configured to bootstrap on startup
    [2018-04-16T11:15:59.54Z] [INFO] - Attempting bootstrap...
    [2018-04-16T11:15:59.54Z] [INFO] - AnsibleBroker::Bootstrap
    [2018-04-16T11:15:59.54Z] [DEBUG] - Dao::BatchGetRaw
    time="2018-04-16T11:15:59Z" level=info msg="== REGISTRY CX =="
    time="2018-04-16T11:15:59Z" level=info msg="Name: dh"
    time="2018-04-16T11:15:59Z" level=info msg="Type: dockerhub"
    time="2018-04-16T11:15:59Z" level=info msg="Url: https://registry.hub.docker.com"
    time="2018-04-16T11:15:59Z" level=info msg="== REGISTRY CX =="
    time="2018-04-16T11:15:59Z" level=info msg="Name: localregistry"
    time="2018-04-16T11:15:59Z" level=info msg="Type: local_openshift"
    time="2018-04-16T11:15:59Z" level=info msg="Url: "
    time="2018-04-16T11:16:03Z" level=info msg="APBs filtered by white/blacklist filter:
      -> ansibleplaybookbundle/kubevirt-ansible
      -> ansibleplaybookbundle/origin-ansible-service-broker
      -> ansibleplaybookbundle/mediawiki123
      -> ansibleplaybookbundle/ansible-service-broker
      -> ansibleplaybookbundle/apb-base
      -> ansibleplaybookbundle/apb-tools
      -> ansibleplaybookbundle/hello-world
      -> ansibleplaybookbundle/origin-service-catalog
      -> ansibleplaybookbundle/py-zip-demo
      -> ansibleplaybookbundle/photo-album-demo-app
      -> ansibleplaybookbundle/apb-assets-base
      -> ansibleplaybookbundle/helm-bundle-base
      -> ansibleplaybookbundle/origin
      -> ansibleplaybookbundle/photo-album-demo-api
      -> ansibleplaybookbundle/asb-installer
      -> ansibleplaybookbundle/deploy-broker
      -> ansibleplaybookbundle/manageiq-apb-runner
      -> ansibleplaybookbundle/origin-deployer
      -> ansibleplaybookbundle/origin-docker-registry
      -> ansibleplaybookbundle/origin-haproxy-router
      -> ansibleplaybookbundle/origin-pod
      -> ansibleplaybookbundle/origin-sti-builder
      -> ansibleplaybookbundle/origin-recycler"
    time="2018-04-16T11:16:09Z" level=info msg="No runtime label found. Set runtime=1. Will use 'exec' to gather bind credentials"
    time="2018-04-16T11:16:12Z" level=info msg="Didn't find encoded Spec label. Assuming image is not APB and skiping"
    time="2018-04-16T11:16:17Z" level=info msg="No runtime label found. Set runtime=1. Will use 'exec' to gather bind credentials"
    time="2018-04-16T11:16:23Z" level=info msg="Didn't find encoded Spec label. Assuming image is not APB and skiping"
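One way to see when (or whether) the broker's health endpoint starts answering is a small curl retry loop. This is only a sketch; the IP, port, and path are taken from the readiness event above, and probe_healthz is my own helper name.

```shell
# probe_healthz URL ATTEMPTS: poll an HTTPS health endpoint once per second,
# returning 0 as soon as it answers and 1 after ATTEMPTS failures.
probe_healthz() {
  url=$1
  attempts=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # -k: the broker serves a self-signed cert; -f: treat HTTP errors as failure
    if curl -ksf --max-time 2 "$url" >/dev/null 2>&1; then
      return 0    # endpoint is up
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1        # never came up within the window
}

# Example, using the broker pod IP from the event above:
# probe_healthz https://172.17.0.9:1338/healthz 120
```

Run from inside the minishift VM (or anywhere the pod network is routable), this tells you whether the broker ever starts listening or is genuinely crash-looping.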
@edmacabebe Based on the broker's logs you provided and the event (Readiness probe failed: Get https://172.17.0.9:1338/healthz: dial tcp 172.17.0.9:1338: getsockopt: connection refused), I believe that the broker is taking longer than 15 seconds to come up. If you could bump the initialDelaySeconds for the broker's readiness|liveness probes, that would be very helpful.
To modify the deployment config:

    oc edit deploymentconfig -n ansible-service-broker asb

Then find initialDelaySeconds for the readiness|liveness probes and bump it to 120 (we want to be certain the broker has all the time it needs).
Thank you for all of the logs & information, very helpful; hopefully we can get you past these issues. I'm hopeful it's just a bug in our deploymentconfig.
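For reference, the probe stanza in question looks roughly like this after the suggested edit. This is a sketch, not the actual file: the field names are standard Kubernetes probe fields, the port and path come from the healthz error above, and the rest of the deploymentconfig is omitted.

```yaml
# Excerpt only: readiness/liveness probes of the asb deploymentconfig,
# with initialDelaySeconds bumped to 120 per the suggestion above.
readinessProbe:
  httpGet:
    path: /healthz
    port: 1338
    scheme: HTTPS
  initialDelaySeconds: 120
livenessProbe:
  httpGet:
    path: /healthz
    port: 1338
    scheme: HTTPS
  initialDelaySeconds: 120
```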
    minishift addon apply ansible-service-broker
    -- Applying addon 'ansible-service-broker':.
    Unable to connect to the server: x509: certificate signed by unknown authority
    Error applying the add-on: Error executing command 'oc new-project ansible-service-broker'.
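A possible next step for this x509 failure, sketched below and not verified against this setup: the error usually means oc is still pointed at a context (and CA bundle) from a previous, now-deleted cluster. The developer credentials and :8443 port are minishift defaults, and the guard keeps the script inert when the tools are absent.

```shell
# Re-point oc at the current minishift VM; only acts when the tools exist.
if command -v minishift >/dev/null 2>&1 && command -v oc >/dev/null 2>&1; then
  oc config current-context                          # which context is active?
  eval "$(minishift oc-env)"                         # prefer the bundled oc binary
  # With minishift's default identity provider, any password is accepted.
  oc login -u developer -p developer "https://$(minishift ip):8443"
  TOOLS_PRESENT=yes
else
  echo "minishift/oc not on PATH; nothing to re-point"
  TOOLS_PRESENT=no
fi
```

If the login itself fails on the certificate, the stale CA entry in ~/.kube/config for this cluster likely needs to be removed before retrying the addon.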