kubernetes / cloud-provider-openstack


[occm] LB Pool not updated when proxy-protocol service annotation changed #1167

Closed. seanschneeweiss closed this issue 3 years ago.

seanschneeweiss commented 4 years ago

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: A change to the service annotation `loadbalancer.openstack.org/proxy-protocol` is not picked up by the openstack-cloud-controller-manager. Changes are only propagated to the OpenStack load balancer pool when it is initially created.

What you expected to happen: Changing the above service annotation should be supported, and the pool should be updated or recreated accordingly.

How to reproduce it: Add the service annotation to an existing service of type LoadBalancer.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    loadbalancer.openstack.org/proxy-protocol: "true"
spec:
  ports:
  - port: 22000
    protocol: TCP
    targetPort: 8080
  type: LoadBalancer
```
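
For illustration, here is a minimal Go sketch of the mapping the controller needs to re-evaluate on every reconcile, not only at pool creation time: the annotation decides whether the pool protocol should be PROXY or the plain listener protocol. The function name, the `ParseBool` handling, and the TCP fallback are assumptions for this sketch, not the actual cloud-provider-openstack code.

```go
package main

import (
	"fmt"
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// The annotation from the report above.
const proxyProtocolAnnotation = "loadbalancer.openstack.org/proxy-protocol"

// desiredPoolProtocol derives the Octavia pool protocol a Service asks for:
// PROXY when the annotation parses as true, otherwise the plain listener
// protocol (TCP for the Service in this report). Name and fallback are
// illustrative only.
func desiredPoolProtocol(svc *corev1.Service) string {
	if v, ok := svc.Annotations[proxyProtocolAnnotation]; ok {
		if enabled, err := strconv.ParseBool(v); err == nil && enabled {
			return "PROXY"
		}
	}
	return "TCP"
}

func main() {
	svc := &corev1.Service{}
	svc.Annotations = map[string]string{proxyProtocolAnnotation: "true"}
	fmt.Println(desiredPoolProtocol(svc)) // PROXY
}
```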

Anything else we need to know?: Updating a load balancer pool's protocol is not supported by Octavia; see "Update a Pool" in the Octavia API pool documentation. To change the protocol, the pool has to be deleted and recreated.
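
Because the protocol cannot be patched in place, a fix essentially has to compare the desired protocol with the existing pool and recreate the pool when they differ. Below is a rough, self-contained Go sketch of that step; the interface, types, and function names are invented for illustration. The real controller talks to Octavia through gophercloud and would also have to recreate the pool's members and re-attach the new pool to the listener.

```go
package main

import "fmt"

// pool is a minimal stand-in for an Octavia pool; only the field relevant to
// this issue is modeled.
type pool struct {
	ID       string
	Protocol string // "TCP" or "PROXY"
}

// octaviaClient is a hypothetical interface used only for this sketch.
type octaviaClient interface {
	DeletePool(id string) error
	CreatePool(listenerID, protocol string) (*pool, error)
}

// ensurePoolProtocol sketches the reconcile step this issue asks for: when the
// desired protocol (derived from the proxy-protocol annotation) differs from
// the existing pool's protocol, delete the pool and create a new one.
func ensurePoolProtocol(c octaviaClient, listenerID string, existing *pool, desired string) (*pool, error) {
	if existing != nil && existing.Protocol == desired {
		return existing, nil // annotation unchanged, keep the pool
	}
	if existing != nil {
		if err := c.DeletePool(existing.ID); err != nil {
			return nil, fmt.Errorf("deleting pool %s: %w", existing.ID, err)
		}
	}
	return c.CreatePool(listenerID, desired)
}

// fakeClient lets the sketch run without a real OpenStack endpoint.
type fakeClient struct{ nextID int }

func (f *fakeClient) DeletePool(id string) error { return nil }

func (f *fakeClient) CreatePool(listenerID, protocol string) (*pool, error) {
	f.nextID++
	return &pool{ID: fmt.Sprintf("pool-%d", f.nextID), Protocol: protocol}, nil
}

func main() {
	c := &fakeClient{}
	existing := &pool{ID: "pool-0", Protocol: "TCP"}
	p, _ := ensurePoolProtocol(c, "listener-1", existing, "PROXY")
	fmt.Println(p.ID, p.Protocol) // a new pool with protocol PROXY
}
```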

This issue is a follow up from #1149.

Environment:

Sean Schneeweiss sean.schneeweiss@daimler.com, Daimler TSS GmbH, legal info/Impressum

bgagnon commented 4 years ago

I would classify this as a bug, not a missing feature.

seanschneeweiss commented 4 years ago

/kind bug

seanschneeweiss commented 4 years ago

/remove-kind feature

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

lingxiankong commented 3 years ago

/remove-lifecycle stale

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

seanschneeweiss commented 3 years ago

/remove-lifecycle stale

Might add a fix soon.

chrischdi commented 3 years ago

After talking to @seanschneeweiss, I created the PR that should fix this bug :-)