kubernetes / cloud-provider-openstack

Apache License 2.0

[occm] Allow IP address sharing for Services of type LoadBalancer #933

Closed chrischdi closed 2 years ago

chrischdi commented 4 years ago

/kind feature

The binaries affected: openstack-cloud-controller-manager

What happened:

What you expected to happen: both Services share one load balancer and the same IP address (192.168.200.5).

How to reproduce it:

$ cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kuard-1
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.200.5
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: kuard
$ cat service2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kuard-2
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.200.5
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: kuard
$ kubectl apply -f service.yaml -f service2.yaml

Anything else we need to know?:

What gets touched by implementing it:

There may be other things I didn't consider here.

Environment:

kayrus commented 4 years ago

Sorry for advertising this, but it may help you: https://github.com/kayrus/ingress-terraform

lingxiankong commented 4 years ago

@chrischdi, thanks for proposing this feature request.

sbueringer commented 4 years ago

Just some input: I think we shouldn't force the user to define loadBalancerIP, because they might not know which IPs are available in OpenStack. An alternative for the name would be, in my opinion, to not set a name and instead add all associated services to the tags of the LB resources. This would be the easiest from a user perspective.
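
As a rough illustration of that idea (purely a sketch: the tag format below is invented, not an existing OCCM convention), the shared load balancer would carry no single-service name and would instead track its consumers via tags:

```yaml
# Hypothetical Octavia load balancer (abridged) shared by the two example
# services; the tag values are made up for illustration only.
vip_address: 192.168.200.5
name: ""   # no per-service name encoded in the LB name
tags:
  - kube_service_mycluster_default_kuard-1
  - kube_service_mycluster_default_kuard-2
```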

chrischdi commented 4 years ago

> Just some input: I think we shouldn't force the user to define loadBalancerIP, because they might not know which IPs are available in OpenStack. An alternative for the name would be, in my opinion, to not set a name and instead add all associated services to the tags of the LB resources. This would be the easiest from a user perspective.

Or use the metadata/tags for creation of the LB, and once the IP address is known we could rename the load balancer.

Another alternative: use a second annotation that defines a unique name, which is then set on all services that should share the load balancer.
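
For illustration, a minimal sketch of this second-annotation idea; the annotation key below is invented, and OCCM would have to map every Service carrying the same value onto a single load balancer:

```yaml
# Hypothetical sharing annotation; "loadbalancer.openstack.org/shared-lb-name"
# is made up for this sketch and does not exist in OCCM.
apiVersion: v1
kind: Service
metadata:
  name: kuard-1
  annotations:
    loadbalancer.openstack.org/shared-lb-name: kuard-shared
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: kuard
---
apiVersion: v1
kind: Service
metadata:
  name: kuard-2
  annotations:
    loadbalancer.openstack.org/shared-lb-name: kuard-shared
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: kuard
```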

hamzazafar commented 4 years ago

> Just some input: I think we shouldn't force the user to define loadBalancerIP, because they might not know which IPs are available in OpenStack. An alternative for the name would be, in my opinion, to not set a name and instead add all associated services to the tags of the LB resources. This would be the easiest from a user perspective.
>
> Or use the metadata/tags for creation of the LB, and once the IP address is known we could rename the load balancer.
>
> Another alternative: use a second annotation that defines a unique name, which is then set on all services that should share the load balancer.

Or why not simply pass the ID of an existing external load balancer in an annotation? Workflow: create the 1st service -> the external LB gets created -> get the ID of the LB using the OpenStack CLI -> create the 2nd service and pass the ID of the existing LB in an annotation.

This requires adding a new function to look up a load balancer by ID.
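
A minimal sketch of that workflow, assuming a hypothetical annotation key; the load balancer ID would be looked up first (e.g. with `openstack loadbalancer list`) and resolved by the new lookup-by-ID function mentioned above:

```yaml
# Second Service reusing the load balancer already created for kuard-1.
# "loadbalancer.openstack.org/load-balancer-id" is a hypothetical annotation
# name used only in this sketch; <LB_UUID> stands for the ID of the existing
# load balancer as reported by the OpenStack CLI.
apiVersion: v1
kind: Service
metadata:
  name: kuard-2
  annotations:
    loadbalancer.openstack.org/load-balancer-id: "<LB_UUID>"
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: kuard
```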

Some additional things to take care of:

hamzazafar commented 4 years ago

Currently OCCM deletes all obsolete resources (e.g. listeners attached to a load balancer that don't correspond to any ports defined in a service): https://github.com/kubernetes/cloud-provider-openstack/blob/27b70e3ded626783dbb0d83e1a58dd5cacbed37d/pkg/cloudprovider/providers/openstack/openstack_loadbalancer.go#L1181

From the point of view of a single service, all resources (listeners, pools, members, etc.) created by other services will be considered obsolete and eventually deleted.

A lot of refactoring is required to support this feature.
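
To make the conflict concrete, here is a sketch (listener layout assumed from the two example services) of what a shared load balancer would hold and why the current cleanup logic gets in the way:

```yaml
# Listeners a shared load balancer would carry for kuard-1 and kuard-2.
# When OCCM reconciles kuard-1 (spec.ports: [80]), the port-8080 listener
# matches no port of that Service, so it would be treated as obsolete and
# deleted - and vice versa when kuard-2 is reconciled.
listeners:
  - protocol: TCP
    protocol_port: 80     # created for kuard-1
  - protocol: TCP
    protocol_port: 8080   # created for kuard-2
```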

chrigl commented 4 years ago

Tags are unfortunately not an option as long as we support Neutron LBaaS v2 and Octavia < 2.5 :/

sbueringer commented 4 years ago

@lingxiankong @hamzazafar @chrigl

If I/we were to commit to implementing this, would there be interest in merging it, provided we come up with a feasible solution?

I think the main problems are:

So what I want to clarify is, more or less: is there a chance that this will go upstream? Because if not, we can just design it for our internal use cases and implement it in a fork. Of course, I would rather implement it upstream.

lingxiankong commented 4 years ago

@sbueringer I still have concerns that this feature may break the stability of the openstack cloud controller manager since, as you said, a lot of refactoring is needed. But thank you so much for bringing up this discussion.

hamzazafar commented 4 years ago

@sbueringer Just like the others, I am also very much interested in this feature. I think we should refactor the code base in small chunks; that way we can detect issues earlier.

@lingxiankong what do you think?

lingxiankong commented 4 years ago

> I think we should refactor the code base in small chunks

I think this could work.

rochaporto commented 4 years ago

@sbueringer do you already have some code for this? We're very interested in this feature and have some downstream prototype code.

sbueringer commented 4 years ago

@rochaporto We don't have any code yet, but we want to start developing it at some point in the next few weeks. We would definitely be interested in taking a look at your code if possible :)

/cc @chrischdi

lingxiankong commented 4 years ago

FYI, please see a draft design for this feature at https://github.com/kubernetes/cloud-provider-openstack/pull/1118#issuecomment-665938534

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

chrischdi commented 3 years ago

/remove-lifecycle rotte

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/933#issuecomment-751438969):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

chrischdi commented 3 years ago

/reopen
/remove-lifecycle rotten

k8s-ci-robot commented 3 years ago

@chrischdi: Reopened this issue.

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/933#issuecomment-784981631):

> /reopen
> /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

k8s-triage-robot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

k8s-ci-robot commented 3 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/933#issuecomment-886996601):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

rissson commented 3 years ago

I think this is still relevant, as many people have expressed their interest in it.

lingxiankong commented 3 years ago

> I think this is still relevant, as many people have expressed their interest in it.

Thanks for your interest; I will continue implementing this feature.

lingxiankong commented 3 years ago

/assign

lingxiankong commented 3 years ago

/reopen

lingxiankong commented 3 years ago

/remove-lifecycle rotten