Describe the bug
The OVS Multitenant setting caused our pre-production, enterprise-grade deployment of our application to fail: the application has multiple components deployed across namespaces, and those components were unable to communicate with each other. As a workaround, we had to explicitly allow this cross-namespace communication. We were not aware that the script "https://github.com/microsoft/openshift-container-platform/blob/master/scripts/deployOpenShift.sh" changes the OCP 3.11 default SDN plugin (which is "openshift-ovs-subnet") to "openshift-ovs-multitenant".
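For reference, the workaround we applied was along these lines; the project names are placeholders and the exact commands may differ per environment:

```bash
# With the ovs-multitenant plugin, projects are network-isolated by default.
# Joining the affected projects into one network (or making a project global)
# restores cross-namespace pod-to-pod traffic.
oc adm pod-network join-projects --to=frontend backend

# Alternatively, make a shared project reachable from all projects:
# oc adm pod-network make-projects-global shared-services
```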
To Reproduce
Use the script "https://github.com/microsoft/openshift-container-platform/blob/master/scripts/deployOpenShift.sh" as is, then deploy an application spread across namespaces in which pods in one namespace talk to pods in another namespace (see the sketch below).
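A minimal way to observe the isolation (project, pod, and image names are only examples):

```bash
# Two projects, one pod each. With ovs-multitenant the curl below is blocked;
# with ovs-subnet (the OCP 3.11 default) it succeeds.
oc new-project ns-a
oc run web --image=openshift/hello-openshift --restart=Never --port=8080
oc expose pod web --port=8080

oc new-project ns-b
oc run client --image=registry.access.redhat.com/rhel7/rhel-tools \
  --restart=Never --command -- sleep 3600

# Once both pods are Running, try to reach the service in the other namespace:
oc exec client -- curl -s http://web.ns-a.svc.cluster.local:8080
```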
Expected behavior
https://github.com/microsoft/openshift-container-platform/blob/master/scripts/deployOpenShift.sh should either not contain the line "os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'", OR set the OCP 3.11 SDN default "os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'" instead, OR expose the choice as a field in the parameters JSON file.
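As a rough illustration of the last option, the SDN plugin could be plumbed through the script as a variable fed from the parameters file; the variable and parameter name below are hypothetical, not how deployOpenShift.sh is currently structured:

```bash
# Hypothetical: take the SDN plugin from a parameters-file field (e.g.
# "sdnNetworkPlugin") passed into the script, defaulting to the OCP 3.11
# default instead of hard-coding multitenant.
SDNPLUGIN="${SDNPLUGIN:-redhat/openshift-ovs-subnet}"

cat >> /etc/ansible/hosts <<EOF
os_sdn_network_plugin_name='${SDNPLUGIN}'
EOF
```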
Template Information (please complete the following information):