Sorry for advertising this, but it may help you: https://github.com/kayrus/ingress-terraform
@chrischdi, thanks for proposing this feature request.
Just some input. I think we shouldn't force the user to define the loadBalancerIP, because they might not know which IPs are available in OpenStack. An alternative solution for the name would be, in my opinion, to not set a name and instead add all associated services to the tags of the LB resources. This would be the easiest from a user perspective.
Or use the metadata/tags for the creation of the LB, and after knowing the IP address we could rename the load balancer.
Another alternative: use a second annotation that defines a unique name, which is then shared by all services that should share the load balancer.
Or why not simply pass the ID of the external load balancer in an annotation? Workflow: create the 1st service -> the external LB gets created -> get the ID of the LB using the OpenStack CLI -> create the 2nd service and pass the ID of the existing LB in an annotation.
This requires adding a new function to get a load balancer by ID.
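For illustration only, a minimal sketch of what such a lookup-by-ID helper could look like using gophercloud; the package name, helper name, and error handling are assumptions, not existing OCCM code:

```go
package lbutil

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
)

// getLoadBalancerByID is a hypothetical helper that resolves the load balancer
// referenced by an annotation value (its Octavia ID) instead of by name.
func getLoadBalancerByID(client *gophercloud.ServiceClient, id string) (*loadbalancers.LoadBalancer, error) {
	lb, err := loadbalancers.Get(client, id).Extract()
	if err != nil {
		return nil, fmt.Errorf("failed to get load balancer %s: %v", id, err)
	}
	return lb, nil
}
```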
Some additional things to take care of:
Currently OCCM deletes all obsolete resources (e.g. listeners attached to a load balancer that don't correspond to any ports defined in a service): https://github.com/kubernetes/cloud-provider-openstack/blob/27b70e3ded626783dbb0d83e1a58dd5cacbed37d/pkg/cloudprovider/providers/openstack/openstack_loadbalancer.go#L1181
From the perspective of a single service, all resources (listeners, pools, members, etc.) created by other services would be considered obsolete and eventually deleted.
A lot of refactoring is required to support this feature.
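To make the ownership problem concrete, here is a hedged sketch of the kind of guard the cleanup path would need: a listener is only obsolete if none of the services sharing the load balancer still uses its frontend port. The names and types are illustrative, not the current OCCM code:

```go
package lbutil

import v1 "k8s.io/api/core/v1"

// listenerStillNeeded reports whether any of the services sharing the load
// balancer still exposes the given frontend port; if so, the listener must not
// be deleted while reconciling a single service.
func listenerStillNeeded(listenerPort int32, sharingServices []*v1.Service) bool {
	for _, svc := range sharingServices {
		for _, port := range svc.Spec.Ports {
			if port.Port == listenerPort {
				return true
			}
		}
	}
	return false
}
```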
Tags are unfortunately not an option as long as we support Neutron LBaaS v2 and Octavia < 2.5 :/
@lingxiankong @hamzazafar @chrigl
If I/we committed to implementing this, would there be interest in merging it, provided we come up with a feasible solution?
I think the main problems are:
So what I want to clarify is more or less: is there a chance that this will go upstream? If not, we can just design it for our internal use cases and implement it in a fork. Of course, I would rather implement it upstream.
@sbueringer I still have concerns that this feature may break the stability of the openstack-cloud-controller-manager; as you said, lots of refactoring is needed. But thank you so much for bringing up this discussion.
@sbueringer Just like other people, I am also very much interested in this feature. I think we should refactor the code base in small chunks; this way we can detect issues earlier.
@lingxiankong what do you think ?
I think we should refactor the code base in small chunks
I think this could work
@sbueringer do you already have some code for this? We're very interested in this feature and have some downstream prototype code.
@rochaporto We don't have any code yet, but we wanted to start developing it at some point in the next few weeks when we get to it. We would definitely be interested in taking a look at your code if possible :)
/cc @chrischdi
FYI, please see a draft design for this feature at https://github.com/kubernetes/cloud-provider-openstack/pull/1118#issuecomment-665938534
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen /remove-lifecycle rotten
@chrischdi: Reopened this issue.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
I think this is still relevant, as many people expressed their interest for it.
Thanks for your interest, I will continue implementing this feature.
/assign
/reopen
/remove-lifecycle rotten
The binaries affected:
What happened:
Creating multiple services of type: LoadBalancer having set the same svc.spec.loadBalancerIP fails; the later services report:
Warning SyncLoadBalancerFailed 2m32s (x6 over 5m9s) service-controller Error syncing load balancer: failed to ensure load balancer: floating IP X.X.X.X is not available
What you expected to happen:
How to reproduce it:
Anything else we need to know?:
MetalLB implements a similar feature via the metallb.universe.tf/allow-shared-ip annotation.
What gets touched by implementing it:
The creation and reconciliation of a load balancer by the service name may no longer work.
We would need to adjust the following code to use a common load balancer name for multiple services, or to detect the load balancer by other metadata: https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/cloudprovider/providers/openstack/openstack_loadbalancer.go#L520
We could enforce the need to set a loadBalancerIP when the ip-address-sharing annotation is set, and resolve the load balancer name from the loadBalancerIP, for example by using something like kube_service_x-x-x-x as the load balancer name (a small sketch of this derivation follows below).
We would need some additional validation, e.g.:
There may be other things I didn't consider here.
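As an illustration of the naming idea above, a small sketch that derives a kube_service_x-x-x-x style name from the requested loadBalancerIP; the helper is hypothetical and only meant to show the scheme:

```go
package main

import (
	"fmt"
	"strings"
)

// sharedLBName derives a shared load balancer name from the requested
// loadBalancerIP, e.g. "203.0.113.10" -> "kube_service_203-0-113-10".
func sharedLBName(loadBalancerIP string) string {
	return fmt.Sprintf("kube_service_%s", strings.ReplaceAll(loadBalancerIP, ".", "-"))
}

func main() {
	fmt.Println(sharedLBName("203.0.113.10"))
}
```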
Environment: