Closed by rastislavs 2 days ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
We currently plan to implement this feature and have spent the last few days thinking about how to achieve it. Our main goal is to deploy one (or two) OVN load balancers with dual-stack support and health monitors via a single k8s service. Everything except the "true" dual-stack support is already working.
I first looked into the code to see how much work would be needed to support true dual stack, but since the code is built with a single stack in mind and assumes only one load balancer per service, a huge refactor would be necessary to support having two load balancers (IPv4 and IPv6) per k8s service. We therefore quickly decided to look for other approaches.
A promising solution was to add two listener VIPs to the load balancer. The downside would be requiring Octavia API 1.26 or later. While testing, we discovered that it is indeed possible to add IPv4 and IPv6 VIPs to the OVN load balancer, but as soon as we tried to add IPv6 members alongside the already added IPv4 members, we got the following error:
Provider 'ovn' does not support a requested option: OVN provider does not support mixing IPv4/IPv6 configuration within the same Load Balancer. (HTTP 501) (Request-ID: req-c11d7b3f-b94b-40e9-9b52-86fe35676a3b)
(At least on paper this approach should work with Amphora, but as I said, our goal is to have support for OVN as well as the other providers.)
Since the possible solution with only one load balancer failed, the only remaining option is to rework loadbalancer.go, but in order to have an acceptable PR we agreed not to introduce a breaking change. This means we have to carefully think about how to handle the currently existing annotations like loadbalancer.openstack.org/load-balancer-id or loadbalancer.openstack.org/port-id.
We thought of two fundamental options: either replacing the existing annotations with something like loadbalancer.openstack.org/load-balancer-id-ipv4 and loadbalancer.openstack.org/load-balancer-id-ipv6, or keeping the annotations and using a comma-separated list. The latter would leave us with the problem that we cannot safely say which ID corresponds to which protocol, and it is unclear what should happen with three or more entries. In the case of the load balancer ID it may be possible to look at the main VIP to distinguish which entry corresponds to which protocol, but when it comes to the port ID we do not have a main subnet, and thus, if both IPv4 and IPv6 subnets are linked to the port, we cannot say with certainty for which protocol the port should be used. Relying on an ordered list within the annotation is also not very user friendly. Therefore we decided that this option is not very promising.
The alternative, replacing the existing annotations, is in its raw form not much better. If we really replace the existing annotations we would introduce a breaking change even if we migrate from the "old" annotations to the new ones, because any application or script relying on the "old" annotations would have to be overhauled to support the new and possibly also the "old" ones. This means we must keep the "old" annotations, but if we keep them and also add the new ones, how do we map between the two? The initial migration (adding the new annotations) would be easy, but what happens if the annotations are edited afterwards? How do we know whether the "old" annotation or the new one has been altered when a manifest gets updated, and what should happen if both have been changed? The same problem as with the comma-separated list also arises once the new annotations have been added: if the "old" annotation has been altered, how do we know whether the value corresponds to IPv4 or IPv6?
Therefore simply replacing the existing annotations or naively adding new ones is also not an option, but replacing them in a smart way could be a solution to go with. By "replacing in a smart way" I propose introducing versioning for the annotations. This means that when OCCM finds a load balancer service with only the "old"/v1 annotations it should behave unchanged, but when it sees a service with the v2 annotations it should handle them in the new way. If both kinds of annotations are present, OCCM should fail and do nothing. This way no breaking change would be introduced for existing resources. The remaining problem is how to handle newly created k8s load balancer services when no additional annotations have been supplied. Always falling back to v1 would mean that true dual stack is only possible by adding at least one v2 annotation, which probably implies creating an OpenStack resource by hand. On the other hand, if we choose v2 as the fallback option, existing deployment tools and scripts relying on the v1 annotations will break when creating new resources. Of course we could look at the spec.ipFamilies values and use v2 if it contains two entries and v1 otherwise, but I do not think that most tools/scripts/people set this field deliberately, and thus the decision which version to use would depend on what k8s writes into the field, which is not superior to us simply choosing a fallback version.
This problem could be mitigated by adding an additional annotation like loadbalancer.openstack.org/occm-api-version. If the annotation is missing, we add it based on the existing annotations. If annotations of both versions exist, OCCM should fail and do nothing. To migrate from v1 to v2, it should be possible to only alter the version annotation; OCCM then migrates the v1 annotations by replacing them with the v2 ones and possibly creates the missing load balancer if dual stack is requested (rolling back should not be an option in my opinion). Last but not least, the question remains what to do with new services which do not add any annotation. This question is similar to the one above where we did not have a version annotation. Falling back to v1 would be the least breaking option, and existing deployment tools and scripts should work without any patches (at least to support v1). Falling back to v2 would have the benefit of supporting true dual stack without the need to specify any annotation, but would possibly break deployment tools and scripts. I personally like having v2 as the default, but I do not think that this would be a wise decision and therefore would suggest falling back to v1.
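To make this a bit more concrete, here is a minimal sketch of the version-selection logic described above. It is not actual OCCM code: the package, function name, and the particular set of "v1-only" keys it checks are assumptions for illustration, and it assumes the explicit occm-api-version annotation wins, mixing v1 and v2 is an error, and everything else falls back to v1.

```go
package loadbalancer

import (
	"fmt"
	"strings"
)

// Annotation keys taken from the proposal in this thread; which v1 keys count
// as "protocol-specific" is an assumption made for this sketch.
const (
	annOCCMAPIVersion = "loadbalancer.openstack.org/occm-api-version"
	v2PrefixIPv4      = "ipv4.loadbalancer.openstack.org/"
	v2PrefixIPv6      = "ipv6.loadbalancer.openstack.org/"
)

var v1OnlyKeys = []string{
	"loadbalancer.openstack.org/load-balancer-id",
	"loadbalancer.openstack.org/port-id",
}

// annotationVersion decides which annotation version OCCM should apply:
// an explicit occm-api-version annotation wins, mixing v1 and v2 annotations
// is an error, and services without any hint fall back to v1.
func annotationVersion(annotations map[string]string) (string, error) {
	if v, ok := annotations[annOCCMAPIVersion]; ok {
		return v, nil
	}

	hasV2 := false
	for key := range annotations {
		if strings.HasPrefix(key, v2PrefixIPv4) || strings.HasPrefix(key, v2PrefixIPv6) {
			hasV2 = true
			break
		}
	}
	hasV1 := false
	for _, key := range v1OnlyKeys {
		if _, ok := annotations[key]; ok {
			hasV1 = true
			break
		}
	}

	switch {
	case hasV1 && hasV2:
		return "", fmt.Errorf("both v1 and v2 load balancer annotations are present, refusing to act")
	case hasV2:
		return "v2", nil
	default:
		return "v1", nil
	}
}
```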
Our team has discussed this issue and we think we have a solution which could add true dual stack without introducing a breaking change. One question remains with the solution we found, which should be discussed. I am also open to discussing this issue in a more synchronous way; the university I am working for runs a fleet of BigBlueButton servers (an open source alternative to Zoom/Teams) on which a meeting could be scheduled. Of course, the findings should then be recorded here.
@dulek could you please reopen this issue?
Our proposal for the v2 service annotations would be as follows:
const (
// [...] some other global constants
ServiceAnnotationOCCMAPIVersion = "loadbalancer.openstack.org/occm-api-version"
)
const (
// [...] the unchanged v1 annotations (except for their constant (var) names)
)
const (
// Common options
ServiceAnnotationV2LoadBalancerConnLimit = "loadbalancer.openstack.org/connection-limit"
ServiceAnnotationV2LoadBalancerClass = "loadbalancer.openstack.org/class"
ServiceAnnotationV2LoadBalancerProxyEnabled = "loadbalancer.openstack.org/proxy-protocol"
ServiceAnnotationV2LoadBalancerNetworkID = "loadbalancer.openstack.org/network-id"
ServiceAnnotationV2LoadBalancerTimeoutClientData = "loadbalancer.openstack.org/timeout-client-data"
ServiceAnnotationV2LoadBalancerTimeoutMemberConnect = "loadbalancer.openstack.org/timeout-member-connect"
ServiceAnnotationV2LoadBalancerTimeoutMemberData = "loadbalancer.openstack.org/timeout-member-data"
ServiceAnnotationV2LoadBalancerTimeoutTCPInspect = "loadbalancer.openstack.org/timeout-tcp-inspect"
ServiceAnnotationV2LoadBalancerXForwardedFor = "loadbalancer.openstack.org/x-forwarded-for"
ServiceAnnotationV2LoadBalancerFlavorID = "loadbalancer.openstack.org/flavor-id"
ServiceAnnotationV2LoadBalancerAvailabilityZone = "loadbalancer.openstack.org/availability-zone"
// ServiceAnnotationV2LoadBalancerEnableHealthMonitor defines whether to create health monitor for the load balancer
// pool, if not specified, use 'create-monitor' config. The health monitor can be created or deleted dynamically.
ServiceAnnotationV2LoadBalancerEnableHealthMonitor = "loadbalancer.openstack.org/enable-health-monitor"
ServiceAnnotationV2LoadBalancerHealthMonitorDelay = "loadbalancer.openstack.org/health-monitor-delay"
ServiceAnnotationV2LoadBalancerHealthMonitorTimeout = "loadbalancer.openstack.org/health-monitor-timeout"
ServiceAnnotationV2LoadBalancerHealthMonitorMaxRetries = "loadbalancer.openstack.org/health-monitor-max-retries"
ServiceAnnotationV2LoadBalancerHealthMonitorMaxRetriesDown = "loadbalancer.openstack.org/health-monitor-max-retries-down"
// revive:disable:var-naming
ServiceAnnotationV2TlsContainerRef = "loadbalancer.openstack.org/default-tls-container-ref"
// revive:enable:var-naming
// IPv4 only options
ServiceAnnotationV2LoadBalancerInternalIPv4 = "beta.ipv4.loadbalancer.openstack.org/internal"
ServiceAnnotationV2LoadBalancerFloatingNetworkIDIPv4 = "ipv4.loadbalancer.openstack.org/floating-network-id"
ServiceAnnotationV2LoadBalancerFloatingSubnetIPv4 = "ipv4.loadbalancer.openstack.org/floating-subnet"
ServiceAnnotationV2LoadBalancerFloatingSubnetIDIPv4 = "ipv4.loadbalancer.openstack.org/floating-subnet-id"
ServiceAnnotationV2LoadBalancerFloatingSubnetTagsIPv4 = "ipv4.loadbalancer.openstack.org/floating-subnet-tags"
ServiceAnnotationV2LoadBalancerKeepFloatingIPIPv4 = "ipv4.loadbalancer.openstack.org/keep-floatingip"
// IPv4 options
ServiceAnnotationV2LoadBalancerPortIDIPv4 = "ipv4.loadbalancer.openstack.org/port-id"
ServiceAnnotationV2LoadBalancerSubnetIDIPv4 = "ipv4.loadbalancer.openstack.org/subnet-id"
ServiceAnnotationV2LoadBalancerMemberSubnetIDIPv4 = "ipv4.loadbalancer.openstack.org/member-subnet-id"
ServiceAnnotationV2LoadBalancerLoadbalancerHostnameIPv4 = "ipv4.loadbalancer.openstack.org/hostname"
ServiceAnnotationV2LoadBalancerAddressIPv4 = "ipv4.loadbalancer.openstack.org/load-balancer-address"
ServiceAnnotationV2LoadBalancerIDIPv4 = "ipv4.loadbalancer.openstack.org/load-balancer-id"
// IPv6 options
ServiceAnnotationV2LoadBalancerPortIDIPv6 = "ipv6.loadbalancer.openstack.org/port-id"
ServiceAnnotationV2LoadBalancerSubnetIDIPv6 = "ipv6.loadbalancer.openstack.org/subnet-id"
ServiceAnnotationV2LoadBalancerMemberSubnetIDIPv6 = "ipv6.loadbalancer.openstack.org/member-subnet-id"
ServiceAnnotationV2LoadBalancerLoadbalancerHostnameIPv6 = "ipv6.loadbalancer.openstack.org/hostname"
ServiceAnnotationV2LoadBalancerAddressIPv6 = "ipv6.loadbalancer.openstack.org/load-balancer-address"
ServiceAnnotationV2LoadBalancerIDIPv6 = "ipv6.loadbalancer.openstack.org/load-balancer-id"
)
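Purely as an illustration of the per-family naming scheme (this helper is my own sketch, not part of the proposal), the protocol-specific keys could also be derived from the Service's IP family instead of hard-coding each IPv4/IPv6 pair:

```go
package loadbalancer

import v1 "k8s.io/api/core/v1"

// annotationKeyForFamily is a hypothetical helper that maps a protocol-agnostic
// option name onto its per-family v2 annotation key, e.g.
// annotationKeyForFamily(v1.IPv6Protocol, "load-balancer-id") returns
// "ipv6.loadbalancer.openstack.org/load-balancer-id".
func annotationKeyForFamily(family v1.IPFamily, option string) string {
	prefix := "ipv4.loadbalancer.openstack.org/"
	if family == v1.IPv6Protocol {
		prefix = "ipv6.loadbalancer.openstack.org/"
	}
	return prefix + option
}
```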
As one may notice, to be consistent with the other annotations I renamed the v1 annotation service.beta.kubernetes.io/openstack-internal-load-balancer to beta.ipv4.loadbalancer.openstack.org/internal. I am not sure whether this is a good change since it is referenced in the Kubernetes docs (within the OpenStack tab), but this way it is obvious that it only applies to IPv4 (more or less).
The internal option itself is a bit problematic since it is supposed to make the load balancer only accessible within the OpenStack project, but with the support of IPv6 this is not really the case any more. When the internal option is set to true, no floating IP will be created and the load balancer cannot be shared between multiple services. In order to prevent the creation of a floating IP, the current IPv6 implementation sets this option to true, which has the side effect that the load balancer cannot be shared. I would fix that, but what I cannot fix is that IPv6 can never be internal unless the network is not routed to the internet, and that is not in the scope/responsibility of OCCM. Thus this option should be reworked; yes, this might be a separate issue, but in case this versioning is accepted it would be good if v2 did not have this issue, otherwise a v3 would probably be needed soon after.
My suggestion would be to use an annotation like create-floating-ip instead of an internal annotation, and in the global config the internal option should also be dropped and replaced by allow-creating-floating-ip, allow-ipv4, and allow-ipv6. But as I said, such a discussion should probably take place in another issue.
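For illustration only, the suggested replacement options could be wired into the global config roughly as sketched below; the struct, field, and tag names are assumptions (OCCM's actual config schema may differ), they merely mirror the option names suggested above:

```go
package loadbalancer

// LBConfigSketch shows how the suggested replacement for the global "internal"
// option could look. The struct, field, and tag names are assumptions for
// illustration, not the actual OCCM configuration schema.
type LBConfigSketch struct {
	// AllowCreatingFloatingIP replaces the old "internal" flag: floating IPs
	// are only created when this is enabled and the service requests one.
	AllowCreatingFloatingIP bool `gcfg:"allow-creating-floating-ip"`
	// AllowIPv4 and AllowIPv6 control which address families OCCM is allowed
	// to create load balancers for.
	AllowIPv4 bool `gcfg:"allow-ipv4"`
	AllowIPv6 bool `gcfg:"allow-ipv6"`
}
```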
@dulek @jichenjc @kayrus @zetaab Can anyone please look at this issue? No rush, I just think that you did not get notified about the comments I made, since this is a closed issue.
I'll take a look later. It's a big task, and it looks like it cannot be shipped in the 1.29 release. First of all we need to fix some LBaaS-related bugs.
I came to the same conclusion the more I looked into the code and into what has to be done/what decisions have to be made.
Yep, this is still valid. I've looked through your comments @ProbstDJakob, very good stuff.
So you're looking at creating 2 separate LBs for each dual-stack LB Service. This is something we've looked at too, but not in such a detailed manner.
One thing to consider: I think we can assume that the order of the LB IDs and port IDs in the annotation will match the order of the Service's spec.ipFamilies, because that field is only mutable in a way we could support:
> This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service.
Regarding the confusion between old and new parameters, I think this piece about the K8s API might apply to us: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md#making-a-singular-field-plural. TL;DR: you need clients to specify both the old and the new field correctly in order for this to work in a consistent way.
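Under that ordering assumption, a comma-separated annotation value could be mapped back to address families by index, as in the following sketch (the helper and the annotation layout are hypothetical, not an agreed design):

```go
package loadbalancer

import (
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
)

// lbIDsByFamily illustrates the assumption above: the i-th entry of a
// comma-separated annotation value corresponds to the i-th entry of the
// Service's spec.ipFamilies. The helper and annotation layout are hypothetical.
func lbIDsByFamily(annotationValue string, families []v1.IPFamily) (map[v1.IPFamily]string, error) {
	ids := strings.Split(annotationValue, ",")
	if len(ids) != len(families) {
		return nil, fmt.Errorf("got %d load balancer IDs for %d ipFamilies", len(ids), len(families))
	}
	result := make(map[v1.IPFamily]string, len(ids))
	for i, family := range families {
		result[family] = strings.TrimSpace(ids[i])
	}
	return result, nil
}
```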
Sorry for the late response. The topic "Making a singular field plural" is more or less the same as what we came up with, with the exception (correct me if I am wrong) that we do not see a history. When a manifest is changed, we only get the latest manifest version, but not what has changed or the version before (if it is an update). If that is the case, it would make some parts of the k8s doc hard to implement.
Until now we have not investigated the issue further or written any code, but we are still interested in getting this done and are willing to contribute as soon as some decisions have been made.
/remove-lifecycle rotten
How can we progress this issue? Is there any way I can help, as I have dual stack running across my network and cloud?
That would be great for testing. We currently do not have a dual-stack cluster. If I find some spare time I will rewrite our CI/CD to deploy our testing cluster with dual stack, but I do not know when I will have time, since there is little benefit right now.
I think the first/next step is to decide on how to implement this, but that decision is probably not ours to make.
> Sorry for the late response. The topic "Making a singular field plural" is more or less the same as what we came up with, with the exception (correct me if I am wrong) that we do not see a history. When a manifest is changed, we only get the latest manifest version, but not what has changed or the version before (if it is an update). If that is the case, it would make some parts of the k8s doc hard to implement.

This thing is about CRD versions, so I guess the previous version is the history you have access to.

> Until now we have not investigated the issue further or written any code, but we are still interested in getting this done and are willing to contribute as soon as some decisions have been made.
Doesn't seem like anyone else wants to chime in, so I'll play dictator here. For the internal option - let's keep it but document that it is IPv4-only. If it's set for an IPv6 or dual-stack Service, we'll emit a warning event on the Service.
As for annotation versioning, I think the canonical way to do this is to add both versions of the informative annotations, but for the functional ones the new version always takes precedence. I.e. if v1 and v2 annotations are set, only v2 is taken into account when creating the LB. That way, if someone is using a toolchain expecting v1, they are okay. If you want new features, you need your toolchain fully upgraded.
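A minimal sketch of that precedence rule, just to pin down the intent (the helpers and the way keys are paired are assumptions, not actual OCCM code):

```go
package loadbalancer

// getFunctionalOption sketches the precedence rule above for functional
// annotations: if both versions are set, only the v2 value is used.
// The helper itself is hypothetical; the key pairing is up to the caller.
func getFunctionalOption(annotations map[string]string, v2Key, v1Key string) (string, bool) {
	if v, ok := annotations[v2Key]; ok {
		return v, true // v2 always takes precedence when present
	}
	if v, ok := annotations[v1Key]; ok {
		return v, true // otherwise fall back to v1
	}
	return "", false
}

// setInformativeOption sketches the other half of the rule: informative
// (status-like) annotations are written in both versions so that v1-only
// toolchains keep working.
func setInformativeOption(annotations map[string]string, v2Key, v1Key, value string) {
	annotations[v2Key] = value
	annotations[v1Key] = value
}
```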
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Sorry for the late response. I am currently a bit overwhelmed with work and have to write a master's thesis, so I will not be able to pursue this issue (or, more precisely, to write code/answer in time) until early next year. If someone else is willing to take over, I am happy to assist (as time permits). Also, our testing cluster now has dual-stack support and can freely be re-deployed at any time (fully automatically), which can be used to test changes to OCCM.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind feature
In https://github.com/kubernetes/cloud-provider-openstack/issues/1897 (PR https://github.com/kubernetes/cloud-provider-openstack/pull/1901) we added initial support for dual-stack k8s services.
That implementation has a limitation: if two address families are specified in the service's spec.ipFamilies, OCCM will create only one load balancer, either IPv4 or IPv6, based on the first specified address family. The aim of this task is to add full dual-stack support, which would create two load balancers when two address families are present in the service's spec.
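To make the requested change concrete, here is a rough sketch of the desired behavior (the function names and signatures are hypothetical and do not correspond to the actual loadbalancer.go code): instead of acting only on the first entry of spec.ipFamilies, OCCM would reconcile one load balancer per requested family.

```go
package loadbalancer

import v1 "k8s.io/api/core/v1"

// ensureLoadBalancerForFamily stands in for the real per-family reconciliation
// logic in loadbalancer.go; its name and signature are assumptions.
func ensureLoadBalancerForFamily(svc *v1.Service, family v1.IPFamily) error {
	// ... create or update the Octavia/OVN load balancer for this family ...
	return nil
}

// ensureDualStackLoadBalancers sketches the behavior this issue asks for:
// one load balancer per address family in spec.ipFamilies, instead of only
// acting on the first entry as the current implementation does.
func ensureDualStackLoadBalancers(svc *v1.Service) error {
	for _, family := range svc.Spec.IPFamilies {
		if err := ensureLoadBalancerForFamily(svc, family); err != nil {
			return err
		}
	}
	return nil
}
```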