Closed: menardorama closed this issue 6 years ago
Additional information
```
[root@os-master1 tmenard]# oc get clusternetwork default -o yaml
apiVersion: v1
hostsubnetlength: 9
kind: ClusterNetwork
metadata:
  creationTimestamp: 2017-11-28T09:42:38Z
  name: default
  resourceVersion: "310"
  selfLink: /oapi/v1/clusternetworks/default
  uid: 787683b7-d420-11e7-aa55-0050568a2c6e
network: 10.128.0.0/14
pluginName: redhat/openshift-ovs-multitenant
serviceNetwork: 172.30.0.0/16
```
Does anyone know if setting up vSphere directly at install time is at least supported?
OK, replying to myself...
So the issue is related to the `to_padded_yaml` filter, which is in charge of converting the variables to YAML syntax.
Defining `osm_controller_args` (or `osm_api_server_args`, or `openshift_node_kubelet_args`) in the inventory host file doesn't work when the values contain double quotes.
Even trying to escape them does not work.
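For reference, the failing inventory entries looked roughly like this (reconstructed for illustration; the exact quoting I tried varied, and none of it survived `to_padded_yaml`):

```ini
# Rough reconstruction of the INI inventory entries that fail for me (illustrative only)
[OSEv3:vars]
osm_controller_args={"cloud-provider": ["vsphere"], "cloud-config": ["/etc/vsphere/vsphere.conf"]}
osm_api_server_args={"cloud-provider": ["vsphere"], "cloud-config": ["/etc/vsphere/vsphere.conf"]}
openshift_node_kubelet_args={"cloud-provider": ["vsphere"], "cloud-config": ["/etc/vsphere/vsphere.conf"]}
```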
The workaround is to define those variables in group_vars using YAML syntax (instead of INI syntax), presumably because the values then reach the filter as real structures rather than quoted INI strings.
So it looks like this:
```yaml
osm_controller_args:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/vsphere/vsphere.conf"
osm_api_server_args:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/vsphere/vsphere.conf"
openshift_node_kubelet_args:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/vsphere/vsphere.conf"
  image-gc-high-threshold:
    - '85'
  image-gc-low-threshold:
    - '80'
  max-pods:
    - '250'
  pods-per-core:
    - '10'
```
But unfortunately, due to the poor vSphere implementation in OpenShift 3.6, I am hitting a bug where the NodeIP conflicts with the cluster network: https://bugzilla.redhat.com/show_bug.cgi?id=1433236
So my conclusion is that you can't deploy OpenShift 3.6 with a vSphere configuration in one go; it has to be done in two phases.
This brings me to my second question: why is there a reference architecture for VMware that does not work? https://github.com/openshift/openshift-ansible-contrib/tree/master/reference-architecture/vmware-ansible
Maybe I'll have more luck with 3.7.
Description
I am trying to deploy OpenShift hosted on an ESXi cluster; I have provisioned CentOS 7 VMs with all the prerequisites. I use the playbook found in playbooks/byo/config.yml and have tried different configurations.
Using Container Native Storage works fine, and I can even deploy the registry, logging, and metrics on a dedicated NFS share. The install completes successfully and OpenShift works as expected.
Now, since we are hosting the OpenShift platform on vSphere, I am trying to use the vSphere storage provider. Defining it on an existing cluster works as expected.
But the remaining issue comes when I want to define this storage provider in the inventory so that it is configured at installation time.
The masters are correctly deployed, but the nodes fail to start; there is a remaining issue with the SDN controller.
The deployment of course fails, and I can see the errors in journald on the nodes (see the gist linked under Observed Results).
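For context, the cloud provider config I point the installation at (/etc/vsphere/vsphere.conf) looks roughly like the sketch below; the layout is what the Kubernetes vSphere cloud provider of that era expects as far as I know, and every value is a placeholder rather than my real environment:

```ini
# Minimal sketch of /etc/vsphere/vsphere.conf -- all values are placeholders
[Global]
user = "administrator@vsphere.local"
password = "changeme"
server = "vcenter.example.com"
port = "443"
insecure-flag = "1"
datacenter = "DC1"
datastore = "datastore1"
working-dir = "/DC1/vm/openshift/"

[Disk]
scsicontrollertype = pvscsi
```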
Version
Steps To Reproduce
Expected Results
The installation process should complete correctly
Observed Results
The restart node task is failing
journald log: https://gist.github.com/menardorama/202d78f0a03a61e882e5fa079294724a
Additional Information
CentOS Linux release 7.4.1708 (Core)