1. Describe IN DETAIL the feature/behavior/change you would like to see.
Dear Kops maintainers and users,
I am relatively new to kops, so please point me to the documentation in case this is already described somewhere.
How does one go about setting up DoS/DDoS protection for a website running in a kops cluster?
For example, if we start with the basics and want to set up rate limiting, what is the best practice for doing it? Currently a website can be exposed as an Ingress resource (ingress_class: public), and the ingress Service can carry the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation, which points to the ARN of the ACM certificate. A bunch of other websites can be tied to the same certificate and run in the same kops cluster (see the sketch below).

2. Feel free to provide a design supporting your feature request.

One might think rate limiting can be set up via ingress annotations, either per Ingress resource or globally through the controller ConfigMap: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rate-limiting. I am still not sure, though, how to limit access to a certain path, e.g. /login, the way it is possible in good old nginx: https://www.nginx.com/blog/rate-limiting-nginx/ (two possible approaches are sketched below).

But what if we want to place a CloudFront distribution in front of the website? Do we need to set the ELB created by kops as the origin for CloudFront? And what about WAF ACLs: multiple websites can sit behind that ELB, yet I can attach only one AWS WAF web ACL per CloudFront distribution. Has anyone done that before?

I think it would be a good idea to document this, or at least to post an official point of view on the matter.
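For context, here is roughly the setup I mean. This is only a sketch: the namespace, names, selector, and the ACM certificate ARN are placeholders, and TLS is assumed to terminate on the ELB.

```yaml
# Service fronting the ingress-nginx controller; the AWS cloud provider
# creates an ELB for it. The ACM ARN below is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: kube-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE-ID"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80   # TLS terminates on the ELB, plain HTTP to nginx
```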
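For the per-Ingress annotation route, something like the following is what I imagine. Since the ingress-nginx rate-limit annotations apply to the whole Ingress resource, one way to limit only /login would be to carve that path out into its own Ingress (host, names, and limits are placeholders):

```yaml
# Separate Ingress carrying only /login, so the limit applies to that path only.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-login
  annotations:
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/limit-rps: "5"               # 5 requests/second per client IP
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"  # allow short bursts
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /login
            backend:
              serviceName: example-web
              servicePort: 80
```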
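And for the nginx-style limit_req on a single path, a possible (unverified) sketch: define the zone at the http{} level via the controller ConfigMap's http-snippet, then reference it from the /login Ingress with a configuration-snippet annotation. The ConfigMap name and namespace depend on how the controller was installed; these are placeholders.

```yaml
# ingress-nginx controller ConfigMap: http-snippet injects directives
# at the http{} level of the generated nginx.conf.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx
  namespace: kube-ingress
data:
  http-snippet: |
    limit_req_zone $binary_remote_addr zone=login_zone:10m rate=5r/s;
```

Then on the /login Ingress from the previous sketch:

```yaml
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      limit_req zone=login_zone burst=10 nodelay;
```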
Thank you in advance.