kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

[D]Dos protection (Cloudfront) #5723

Closed TattiQ closed 4 years ago

TattiQ commented 6 years ago

1. Describe IN DETAIL the feature/behavior/change you would like to see.

Dear Kops maintainers and users,

I am relatively new to kops so please point me to the documentation in case this stuff is already described somewhere.
How does one go about setting up DoS/DDoS protection for a website running in kops?
The question is, e.g., if we start with the basics and want to set up rate limiting, what's the best practice for doing it? Currently a website can be exposed as an ingress resource (ingress_class: public), and the ingress service can have the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-cert defined, which points to the AWS certificate request id. A bunch of other websites can be tied to the same cert request and run in the same kops cluster.

2. Feel free to provide a design supporting your feature request.

One might think rate limiting can be set up via ingress annotations, either per ingress resource or globally as a config map: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rate-limiting (I'm still not sure, though, how one limits access to a certain path, e.g. /login, as is possible in good old nginx: https://www.nginx.com/blog/rate-limiting-nginx/).

But what if we want to leverage a CloudFront distribution placed in front of the website? Do we need to set the ELB created by kops as the origin for CloudFront, and what about the WAF ACLs? E.g., multiple websites can sit behind that ELB, yet I can choose only one AWS WAF Web ACL per CloudFront distribution. Has anyone done that before?

I think it's a good idea to document this, or at least post an official point of view on the matter.
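For concreteness, a minimal sketch of the setup described above, combined with annotation-based rate limiting (all names, hosts, and the certificate ARN are placeholders; the `limit-rps`/`limit-connections` annotations come from the ingress-nginx docs linked above):

```yaml
# Service fronting the nginx ingress controller: TLS is terminated at the
# ELB using an ACM certificate (the ARN below is a placeholder).
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80
---
# Ingress with per-client rate limiting applied by ingress-nginx.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-site
  annotations:
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/limit-rps: "10"          # requests/second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "20"  # concurrent connections per client IP
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-site
                port:
                  number: 80
```

Note these annotations limit the whole ingress rule, not a single path.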
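For the per-path case (limiting only /login), plain nginx from the linked blog post would look roughly like the fragment below; with ingress-nginx the equivalent would have to go through a configuration snippet rather than a dedicated annotation, so treat this as an illustration, not something kops manages:

```nginx
# Inside the http {} block: a shared zone keyed by client IP,
# 10 MB of state, at most 10 requests/second per IP.
limit_req_zone $binary_remote_addr zone=login:10m rate=10r/s;

server {
    listen 80;

    location /login {
        # Apply the limit only on this path; allow bursts of up to
        # 5 queued requests and reject the rest.
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://backend;  # "backend" is a placeholder upstream
    }
}
```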
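On the CloudFront question: one possible shape, sketched as a trimmed (not complete; required fields such as CallerReference are omitted, and all names are placeholders) DistributionConfig fragment, is to use the kops-created ELB's DNS name as a custom origin and attach the single allowed Web ACL at the distribution level:

```json
{
  "Comment": "CloudFront in front of the kops-managed ELB (sketch)",
  "Enabled": true,
  "WebACLId": "EXAMPLE-WEB-ACL-ID",
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "kops-elb",
        "DomainName": "example-1234567890.us-east-1.elb.amazonaws.com",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "https-only"
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "kops-elb",
    "ViewerProtocolPolicy": "redirect-to-https"
  }
}
```

Since only one Web ACL can be attached per distribution, the ACL's rules would have to cover every site behind the shared ELB; the alternative is one distribution per site, all pointing at the same ELB and forwarding the Host header so the ingress can route.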

Thank you in advance.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

dennisotugo commented 5 years ago

/remove-lifecycle rotten

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

dennisotugo commented 5 years ago

/remove-lifecycle rotten

dennisotugo commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

dennisotugo commented 5 years ago

/remove-lifecycle rotten

dennisotugo commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kops/issues/5723#issuecomment-575127244):

>Rotten issues close after 30d of inactivity.
>Reopen the issue with `/reopen`.
>Mark the issue as fresh with `/remove-lifecycle rotten`.
>
>Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
>/close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.