Closed: nikParasyr closed this issue 2 years ago.
@iamemilio thoughts? (Although this doesn't currently affect OpenShift)
My initial thought is that option 2 above would be preferable. My concern about option 1 is that it's potentially too simplistic. Potential use cases I can think of:
However, if we were confident that we could cover all reasonable use cases without too much complexity, it might be worth considering.
User can define a list with allowed CIDRs on OpenStackCluster.
Conceptually this seems reasonable, as OpenStackCluster is the entity that defines the cluster itself, so an API block/allow list is well suited to it. Stein is a pretty old version, so I think we are fine adding such a requirement (maybe tolerate pre-Stein versions by saying it's not supported rather than blocking).
Not sure if LBaaS is supported instead of Octavia.
Octavia is the future, and I am not sure we even tested LBaaS at all.
@nikParasyr Neutron LBaaS v2 support was already dropped: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/813
Thanks for the feedback :)
I see that options 2 and 4 are mentioned, which are also the ones that make the most sense. There are some differences between the two approaches, although both should be rather easy to implement and maintain. I'd like some more feedback on the differences mentioned below before I start implementing this.
case 4, "add a allowedCidrs to listener.CreateOpts": This requires octavia 2.12. From the feedback this doesn't seem to be an issue necessarily and it can be added in a way that when it's not defined it will not add it, so users with lower octavia versions can keep using CAPO, by not setting this option. Functionally, this translates to HAproxy white/blacklist on the default octavia provider(amphora). I'm not sure how and if it works with other octavia providers. I don't know how much CAPO is worried about different providers or it sticks with "vanilla" openstack.
case 2, "associate a list of user created SGs to LBs vip_port after creation": This should work regardless of octavia version and/or provider. Also functionally it is a bit different as packages are dropped by neutron before they even reach the LB, in some sense this can be viewed as "stronger"/"better".
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
@nikParasyr as the current OCCM implementation is doing case 4 (https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/loadbalancer.go#L1467-L1470), I would prefer this implementation, as the underlying OpenStack objects (CAPO-managed LB and OCCM-managed LB) then behave/look more or less the same. WDYT?
Are you still interested in the implementation, or is this something we could take over from you?
@bavarianbidi hello.
@nikParasyr as the current OCCM implementation is doing case 4 (https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/openstack/loadbalancer.go#L1467-L1470), I would prefer this implementation, as the underlying OpenStack objects (CAPO-managed LB and OCCM-managed LB) then behave/look more or less the same. WDYT?
I also think that case 4 is the way to go forward; it's the official vanilla OpenStack way of doing it.
A couple of notes/thoughts I had:
Are you still interested in the implementation, or is this something we could take over from you?
Unfortunately I haven't had much time to implement it and I don't see this changing in the coming couple of months, so it would be really nice if you could pick it up :+1:
Hey @nikParasyr, I will work on it.
These are my first thoughts about it.
Spec: adding a list to the APIServerLoadBalancer struct:
type APIServerLoadBalancer struct {
    // Enabled defines whether a LoadBalancer should be created.
    Enabled bool `json:"enabled,omitempty"`
    // AdditionalPorts adds additional TCP ports to the load balancer.
    AdditionalPorts []int `json:"additionalPorts,omitempty"`
+   // AllowedCIDRs restricts access to the load balancer to the given client CIDRs.
+   AllowedCIDRs []string `json:"allowedCidrs,omitempty"`
}
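For illustration only (the exact spec path and the CIDR values below are assumptions, presumably surfaced as spec.apiServerLoadBalancer.allowedCidrs in the OpenStackCluster manifest), populating the new field could look like this:

```go
// Placeholder CIDRs; a real cluster would list e.g. bastion or NAT addresses.
lb := APIServerLoadBalancer{
	Enabled:         true,
	AdditionalPorts: []int{443},
	AllowedCIDRs:    []string{"203.0.113.0/24", "10.6.0.0/16"},
}
```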
It's possible to define multiple listeners via additionalPorts: should we apply the firewall policy to all listeners for now, or only to the API server listener (6443)? Is it a valid use case to have different allowedCidrs per listener (future impl.)?
How much "auto discovery" is needed/valid?
During cluster-creation a user only knows a few IPs (e.g. IP of a central bastion host).
Is it safe to get the router ip from the external_gateway_info
field of a router (CAPO cluster and target cluster) if exists or
should a admin/user know/define these IPs by it's own (as there are many OS deployments where the router-ip can't be found in the external_gateway_info
field)
If we have the IP auto discovery, should we write the additional IPs back into the Spec or does a new status field make sense
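On the auto-discovery question, a hedged sketch of how the router's external IPs could be read via Gophercloud, assuming the router ID is already known (e.g. from the cluster status); on deployments that don't expose the gateway info the returned list would simply be empty and the user would have to supply the IPs themselves:

```go
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
)

// externalGatewayIPs returns the external fixed IPs of a router's gateway,
// read from its external_gateway_info. The result may be empty on some deployments.
func externalGatewayIPs(networkClient *gophercloud.ServiceClient, routerID string) ([]string, error) {
	router, err := routers.Get(networkClient, routerID).Extract()
	if err != nil {
		return nil, err
	}
	var ips []string
	for _, fip := range router.GatewayInfo.ExternalFixedIPs {
		ips = append(ips, fip.IPAddress)
	}
	return ips, nil
}
```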
Update 2022-04-25:
For now I will continue by adding a new status field which holds the information about the router/NAT IP of the target cluster (this could be easily fetched).
Discovering the IP of the management cluster is much trickier than I initially thought: you need the cluster name of the management cluster to either talk to the OpenStack API for this specific tenant, or walk down the resources of this cluster to get the outgoing IP address. I would recommend not making this field immutable, as the IP restriction might change over the lifetime of an OpenStack cluster.
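Purely as an illustration of the status-field idea (the field name and shape below are made up, not the final CAPO API):

```go
// Sketch of a possible status addition; keeping discovered IPs in status
// leaves the spec fully user-owned.
type OpenStackClusterStatus struct {
	// ... existing status fields ...

	// RouterIPs holds the discovered external/NAT IPs of the target cluster's router.
	RouterIPs []string `json:"routerIPs,omitempty"`
}
```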
cc: @chrischdi / @tobiasgiese / @seanschneeweiss as you might also have an opinion about this
One thing about whether to write back to spec or status: IMHO writing back to spec is a bad pattern and would break / not work when using ClusterClass. CAPA already encountered an issue there, if I remember correctly.
Having a field in status too may be good to prevent unnecessary API calls: the controller would be able to do the diff and only make calls to OpenStack if there has been a change to the CR itself.
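A minimal sketch of that diffing idea, assuming the Gophercloud listener object exposes its current AllowedCIDRs and that listeners.UpdateOpts accepts them; function and variable names are illustrative:

```go
package main

import (
	"reflect"
	"sort"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/listeners"
)

// reconcileAllowedCIDRs only calls Octavia when the desired CIDRs differ from
// what the listener currently has, avoiding unnecessary API calls.
func reconcileAllowedCIDRs(lbClient *gophercloud.ServiceClient, listener *listeners.Listener, desired []string) error {
	current := append([]string{}, listener.AllowedCIDRs...)
	want := append([]string{}, desired...)
	sort.Strings(current)
	sort.Strings(want)
	if reflect.DeepEqual(current, want) {
		return nil // nothing changed on the CR, skip the OpenStack call
	}
	_, err := listeners.Update(lbClient, listener.ID, listeners.UpdateOpts{
		AllowedCIDRs: &desired,
	}).Extract()
	return err
}
```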
/kind feature
Hello,
I find having CAPO manage the ApiServerLB a very good feature that simplifies a lot of things in our case. That being said, the k8s API is then open from everywhere, which is not ideal for us. Currently the only way to have the ApiServerLB with limited access is to create it yourself, which means also creating the network, subnet, router, pool, etc., which is a bit of extra work and adds an additional step to the creation and deletion of clusters. I'd like to have a way to limit access to the ApiServer without having to build it myself. I'm not sure if this is something you think CAPO should support, but I'm mentioning some potential approaches below.
Describe the solution you'd like
Possible approaches:
Anything else you would like to add: