Closed: wingyplus closed this issue 3 years ago.
I understand the logic of using an NLB for direct service or ingress operations. What would be the benefit with regard to API access?
@gladiatr72 An NLB can be used with AWS PrivateLink. Not many other use cases are floating around at the moment, but we're thinking of trying out PrivateLink over VPC peering for inter-cluster comms (our CI pipeline requires comms to the k8s API servers).
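For context, a PrivateLink endpoint service can only be fronted by a Network Load Balancer, which is what makes the NLB a hard requirement for that use case. A minimal CloudFormation sketch, with the resource name and NLB ARN as placeholders (not from this thread):

Resources:
  ApiEndpointService:
    Type: AWS::EC2::VPCEndpointService
    Properties:
      AcceptanceRequired: false
      NetworkLoadBalancerArns:
        # Placeholder ARN. This resource type does not accept Classic ELB
        # ARNs, only NLBs, hence the interest in NLB support.
        - arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/api-nlb/0123456789abcdef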
@wingyplus An alternative at the moment is a custom Service to the k8s API servers on the master nodes on port 443, using the NLB-type Service definition. Requires k8s version 1.9 or later.
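A hedged sketch of that workaround, assuming the apiserver pods carry a k8s-app: kube-apiserver label (verify against your own cluster; the Service name here is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: kube-apiserver-nlb          # hypothetical name
  namespace: kube-system
  annotations:
    # Asks the AWS cloud provider (k8s >= 1.9) for an NLB instead of a Classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    k8s-app: kube-apiserver         # assumed label on the apiserver static pods
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443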
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
It looks like @pluttrell's "/remove-lifecycle stale." didn't take (I assume because of the period at the end). I'm not too familiar with the pros and cons of this proposal, but it seems like it's still relevant, so I'll try running /remove-lifecycle rotten:
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I came across this issue as a follow-up to reading this article (https://medium.com/redbubble/kubernetes-elb-timeout-blues-59d0867d0a71) about the annoying "idle timeout" feature of classic load balancers which, apparently, NLBs don't have. So that's one point in its favor...
The NLBs have an awful side effect of mutating the security groups of the nodes/masters, since the NLB itself cannot have security groups. This means that allowing NLBs in your environment at all can be dangerous: a developer launching an NLB for their service can accidentally expose your cluster. To that end, we have disabled the use of NLBs completely via Open Policy Agent.
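The comment doesn't show the policy itself; a minimal Gatekeeper-style sketch of what "disabled via Open Policy Agent" could look like (the template and kind names are hypothetical, not the poster's actual policy):

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenynlb                  # hypothetical name
spec:
  crd:
    spec:
      names:
        kind: K8sDenyNLB
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenynlb

        # Reject any Service that requests an NLB via the AWS annotation.
        violation[{"msg": msg}] {
          input.review.object.kind == "Service"
          ann := input.review.object.metadata.annotations
          ann["service.beta.kubernetes.io/aws-load-balancer-type"] == "nlb"
          msg := "NLB Services are disabled in this cluster"
        }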
@kplimack I hear that, but what about just for the Kubernetes API server?
@eherot I'm not sure; we spin up an internal [classic] ELB for the kube API and access it via Direct Connect.
This works very well for us, since Kops is able to populate it for us: https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/instancegroup.go#L144-L145
kind: InstanceGroup
spec:
  role: Master
  <snip....>
  externalLoadBalancers:
    - loadBalancerName: {{$.api_internal_elb_name}}
@eherot i gotta know. what is this? also the website is throwing fatal errors. https://github.com/eherot/Giant-Vagina
@kplimack lol was my wife's fine art website before she moved on to doing commissioned pet portraits (consequently I think there may have been some server-side upgrades performed without the needed client updates).
Also, interesting about externalLoadBalancers, although it doesn't look like that's quite analogous to the Kubernetes API load balancer...
It registers the target group for the masters with the referenced load balancer.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Using Network Load Balancers would be preferable to Classic Load Balancers (CLBs).
If I understand correctly, using an NLB would make it possible to preserve the client source IP in audit logs: https://github.com/kubernetes/kubernetes/issues/76928#issuecomment-573278361
We need NLBs too in order to use AWS Global Accelerator. The latter does not support classic load balancers.
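For reference, a hedged CloudFormation sketch of that constraint: a Global Accelerator endpoint group accepts NLB/ALB (or EIP/EC2 instance) endpoints, so a Classic LB can't be wired in. The names, region, and NLB ARN below are placeholders:

Resources:
  Accelerator:
    Type: AWS::GlobalAccelerator::Accelerator
    Properties:
      Name: k8s-api-accelerator          # hypothetical name
  Listener:
    Type: AWS::GlobalAccelerator::Listener
    Properties:
      AcceleratorArn: !Ref Accelerator
      Protocol: TCP
      PortRanges:
        - FromPort: 443
          ToPort: 443
  EndpointGroup:
    Type: AWS::GlobalAccelerator::EndpointGroup
    Properties:
      ListenerArn: !Ref Listener
      EndpointGroupRegion: us-east-1     # placeholder region
      EndpointConfigurations:
        # Placeholder NLB ARN; a Classic ELB ARN would be rejected here.
        - EndpointId: arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/api-nlb/0123456789abcdef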
The active issue on this topic is #8370
There's a PR for this at https://github.com/kubernetes/kops/pull/9011
Just to clarify, in our case we need to be able to spin up an NLB when the ingress type is set to "LoadBalancer". Currently, it creates a Classic LB with changing IPs. On the other hand, the classic LB in front of the k8s cluster API server works fine for us. Please advise if this is not kops related and should be reported somewhere else.
Kops is only responsible for the load balancer that targets the Kubernetes API Server. Any load balancers that target your services in the cluster would be handled by other components outside the scope of Kops, likely cloud-provider-aws.
@demisx in this case you should reach out to the relevant SIG (not sure which one covers ingress / AWS, please help here).
thanks @rifelpet!
Got it. Sorry for posting off topic. Wasn't clear on ELB responsibilities.
/reopen
/remove-lifecycle rotten
/milestone v1.19
@rifelpet: Reopened this issue.
Kops 1.19.0-beta.2 added support for using an NLB as the API load balancer. There is more information in the release notes and docs, and a few bug fixes will land in future 1.19 releases, including cleaning up the old LB during a migration on an existing cluster. Please try out the new functionality and provide us with any feedback. Thanks!
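For anyone trying it out, a minimal sketch of the cluster spec change, per the 1.19 docs mentioned above:

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  api:
    loadBalancer:
      class: Network     # Network = NLB; Classic remains the default for existing clusters
      type: Public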
/close
@rifelpet: Closing this issue.
Currently, Kops provisions a Classic Load Balancer and adds a health check only on SSL:443. It would be good if Kops could switch to a Network Load Balancer.
I'm looking into the CreateLoadBalancer API. For an NLB it needs to create a Target Group and attach instances to it. On the Kops side, this means changing CreateLoadBalancer in https://github.com/kubernetes/kops/blob/master/upup/pkg/fi/cloudup/awstasks/load_balancer.go#L465 to support ELBv2, plus changes to the Terraform and CloudFormation generators.
Love to hear feedback.
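For concreteness, a hedged CloudFormation sketch of the ELBv2 resources such a change would have to generate (subnet and VPC ids are placeholders):

Resources:
  ApiLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internet-facing
      Subnets:
        - subnet-0123456789abcdef0       # placeholder subnet
  ApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Protocol: TCP
      Port: 443
      VpcId: vpc-0123456789abcdef0       # placeholder VPC
      TargetType: instance               # masters are attached as instances
      HealthCheckProtocol: TCP           # NLB target groups health-check over TCP, not SSL
  ApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ApiLoadBalancer
      Protocol: TCP
      Port: 443
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroup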