att-comdev / openstack-helm

PROJECT HAS MOVED TO OPENSTACK
https://github.com/openstack/openstack-helm

ovs-db not in sync with neutron/values.yaml #298

Closed: Ananth-vr closed this issue 7 years ago

Ananth-vr commented 7 years ago

Is this a bug report or feature request? (choose one): bug report

Kubernetes Version (output of kubectl version):

```
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
```

Helm Client and Tiller Versions (output of helm version):

Development or Deployment Environment?: deployment. Explanation: after deployment, edited the interface settings in neutron/values.yaml

from

```yaml
external_bridge: br-ex
ip_address: 0.0.0.0
interface:
  external: enp12s0f0
  default: enp11s0f0
# ...
ovs:
  auto_bridge_add:
    br-physnet1: enp11s0f0
```

to

```yaml
external_bridge: br-ex
ip_address: 0.0.0.0
interface:
  external: eth1
  default: eth2
# ...
ovs:
  auto_bridge_add:
    br-physnet1: eth2
```

Compiled, purged, and reinstalled the neutron chart, but ovs-db still has information about all four interfaces.

Expected Behavior: `ovs-vsctl show` should not list the old interfaces

How to Reproduce the Issue (as minimally as possible): install with the default interfaces in values.yaml, change to new interface names in values.yaml, then purge and reinstall the charts.

Any Additional Comments: minor issue
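The reproduction steps above can be sketched as follows. The release name `neutron` and chart reference `local/neutron` are assumptions for illustration (Helm v2 syntax, matching the era of this issue); each command is echoed as a dry run, so drop the leading `echo` to actually execute it.

```shell
#!/bin/sh
# Dry-run sketch of the purge/reinstall cycle from this report.
# Release name and chart path are assumptions; adjust to your deployment.
echo "helm delete --purge neutron"                 # remove the old release
echo "helm install --name neutron local/neutron"   # reinstall with the edited values.yaml
echo "ovs-vsctl show"                              # stale bridges/ports still appear here
```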

intlabs commented 7 years ago

@krrypto This is expected behavior at present, and a classic engineering tradeoff.

We store the ovsdb on a tmpfs mounted from the host into the pod (/run), which lets us maintain OvS state across upgrades and pod restarts. However, this means that if you change a large or significant configuration parameter, you will either need to intervene manually (e.g. run a job on all nodes, or some ssh fun..) or simply restart the node (which wipes the ovsdb).

There are ways we could try to catch all the corner cases (like device names changing), but they would come at significant cost without some very careful consideration: primarily, operators may wish or need to perform other operations that make use of OvS, so though we could probably safely manage the ovsdb with an iron fist in a development environment, doing so could lead to very significant pain in a more realistic deployment.
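For the manual-intervention option mentioned above, a minimal cleanup sketch (not part of the chart) might look like the following, run on each affected node. The bridge name is taken from this issue; commands are echoed as a dry run, so pipe the output to `sh` (or drop the `echo`) to actually execute them.

```shell
#!/bin/sh
# Hypothetical sketch: remove stale OvS bridges left in the host-mounted
# ovsdb after interface names changed in values.yaml.
STALE_BRIDGES="br-physnet1"

for br in $STALE_BRIDGES; do
  # --if-exists makes del-br a no-op if the bridge is already gone
  echo "ovs-vsctl --if-exists del-br $br"
done
```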