Kong / kubernetes-ingress-controller

:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.
https://docs.konghq.com/kubernetes-ingress-controller/
Apache License 2.0

feature: ability to do canary releases #557

Closed hhstu closed 2 years ago

hhstu commented 4 years ago

Summary

Kong Ingress controller version: 0.7.1

Kong or Kong Enterprise version: 2.0.1

Kubernetes version: 1.15.9

What happened

I want to be able to change the weight of each target through the kube-apiserver (for gray/canary releases) rather than through the Kong Admin API. I think this is more unified and easier when Kong runs in Kubernetes.

To do that, we would need an additional CRD.

hbagdi commented 4 years ago

I want to be able to change the weight of each target through the kube-apiserver (for gray/canary releases) rather than through the Kong Admin API. I think this is more unified and easier when Kong runs in Kubernetes.

I think you are referring to canary releases or traffic splitting to test out a new version that is being rolled out. That functionality is currently Enterprise-only but will be open-sourced in the near future.

Adding targets as CRDs doesn't make much sense, since we want to limit the number of CRDs and rather reuse Kubernetes concepts as much as possible. I can think of a situation where the controller could look up annotations on the pod and set the weight of the targets accordingly, as sketched below.
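
Purely to illustrate that idea (this is a hypothetical sketch, not an implemented or planned annotation), a pod-level weight hint could look something like this:

```yaml
# HYPOTHETICAL sketch only: no such annotation exists in the controller today.
# The idea: the controller reads a weight annotation from each pod and uses it
# when creating the corresponding Kong target.
apiVersion: v1
kind: Pod
metadata:
  name: echo-canary
  labels:
    app: echo
  annotations:
    konghq.com/target-weight: "10"   # hypothetical annotation name
spec:
  containers:
  - name: echo
    image: docker.io/library/nginx:stable   # placeholder image
```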

primeroz commented 4 years ago

Hi, I am also interested in this for the Kong Kubernetes ingress controller.

Since I just started testing Kong Community Edition I do not have access to the Canary plugin, but if you have any example CRD for configuring the Canary plugin, would you mind sharing it so I can understand a bit how it would work? (Multiple services vs. a single service with pods labeled blue and green, for example.)

I was also trying to use the weight functionality in the upstream load-balancing configuration, but I can't see any way to do it with the Kubernetes Ingress or KongIngress CRD. Is that correct?

I am used to doing this with haproxy-ingress (https://github.com/jcmoraisjr/haproxy-ingress/tree/master/examples/blue-green) and was hoping I could do something similar, where I label pods and use that label to set a weight in the KongIngress.
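
Purely to illustrate the kind of configuration I have in mind (this is hypothetical; as confirmed below, nothing like it exists today), a label-driven weight in KongIngress might look like:

```yaml
# HYPOTHETICAL: KongIngress does have an `upstream` section, but the `weights`
# block below does not exist -- it only sketches label-driven weighting.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: demo
upstream:
  algorithm: round-robin        # real Kong upstream field
  weights:                      # hypothetical, not supported
  - selector: {group: blue}
    weight: 90
  - selector: {group: green}
    weight: 10
```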

hbagdi commented 4 years ago

but if you have any example CRD for configuring the Canary plugin, would you mind sharing it so I can understand a bit how it would work?

https://docs.konghq.com/hub/kong-inc/canary/
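
A rough sketch of how that plugin would be wired up through the controller's KongPlugin CRD (the canary plugin itself is Enterprise-only, and the config fields below are assumptions based on the linked docs, so check the link for the exact, current schema):

```yaml
# Enterprise-only plugin; config field names are assumptions from the linked docs.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: canary-10-percent
plugin: canary
config:
  percentage: 10                            # assumed field: share of traffic to divert
  upstream_host: echo-canary.default.svc    # assumed field: host receiving diverted traffic
```

It would then be attached to an Ingress or Service with the konghq.com/plugins: canary-10-percent annotation.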

I was also trying to use the weight functionality in the upstream load-balancing configuration, but I can't see any way to do it with the Kubernetes Ingress or KongIngress CRD. Is that correct?

Correct.

This is not possible currently but is on our radar to add. However, it is unlikely that a Target CRD is an answer here. We will be reusing the Deployment concept.

On the other hand, this is a topic that is being discussed in Kubernetes SIG Network's service-apis project. We want to standardize blue-green deployment in Kubernetes itself, so that users don't have to learn provider-specific configurations.

Stay tuned for updates.

chenjinxuan commented 4 years ago

When is this expected to be supported? Could it follow the nginx-ingress annotations (nginx.ingress.kubernetes.io/canary)?

For example: konghq.com/canary: true and konghq.com/canary-weight: 100.
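
For reference, the ingress-nginx annotations being alluded to look like this (these exist in ingress-nginx; the konghq.com/canary* names above are only a proposal and are not implemented):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"   # send ~20% of traffic here
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-canary   # placeholder Service name
            port:
              number: 80
```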

max-rocket-internet commented 4 years ago

Can we update the title to be more indicative? If I understand correctly, @hhstu wants to be able to control canary settings, e.g. weight, by using annotations on the resources in k8s?

That functionality is currently Enterprise-only

According to this blog post, there is some functionality in the CE version of Kong?

but will be open-sourced in the near future.

🎉

hbagdi commented 4 years ago

Can we update the title to be more indicative?

updated.

Let's talk a little bit more about this feature to get a better understanding of what is needed. Would having a way to split traffic among different k8s services (based on weight) be sufficient? Or are you looking for a more sophisticated canary feature?

One way you can currently get this is to have a single Service backed by two Deployments (the selector in a Service can select pods across multiple Deployments). Have a production Deployment with 3 replicas and a canary with 1 replica. This will ensure that roughly 25% of your traffic goes to the canary, as sketched below.
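
A minimal sketch of that workaround (names and images are placeholders): one Service selects pods from both Deployments, so traffic splits roughly by replica count (3 stable : 1 canary ≈ 75% / 25%).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo                 # matches pods from both Deployments below
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-production
spec:
  replicas: 3
  selector:
    matchLabels: {app: echo, track: production}
  template:
    metadata:
      labels: {app: echo, track: production}
    spec:
      containers:
      - name: echo
        image: registry.example.com/echo:stable   # placeholder image
        ports: [{containerPort: 8080}]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: echo, track: canary}
  template:
    metadata:
      labels: {app: echo, track: canary}
    spec:
      containers:
      - name: echo
        image: registry.example.com/echo:canary   # placeholder image
        ports: [{containerPort: 8080}]
```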

max-rocket-internet commented 4 years ago

Would having a way to split traffic among different k8s services (based on weight) be sufficient?

Yes, this would be sufficient. More advanced options and ideas can be found here

Have a production Deployment with 3 replicas and a canary with 1 replica. This will ensure that roughly 25% of your traffic goes to the canary.

This is not granular enough 🙂

hbagdi commented 4 years ago

Yes, this would be sufficient.

Great. That aligns with what we are planning to build out.

More advanced options and ideas can be found here

Yes, I'm aware of how ingress-nginx does it. Thanks for sharing the link. These options will be supported eventually.

This is not granular enough 🙂

Totally agree with this. I'm not hand-waving the feature request away but noting a workaround that might work for others.

Kong core requires some changes to support more advanced use cases, but we have some ideas of how we can support canary and traffic-split features without any changes in Kong core. We will schedule something like this for 0.11.0 or later.

hbagdi commented 4 years ago

This is currently partially blocked on Kong: https://github.com/Kong/kong/issues/6335.

We can, however, support partial load balancing by mixing endpoints from different k8s services into a single upstream with multiple targets of varied weights.
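
At the Kong level (independent of the controller), the weighted-target arrangement described above looks roughly like this in declarative config; the upstream and target names are placeholders:

```yaml
# Sketch of weighted targets in Kong declarative config (decK / kong.yml style);
# the controller does not generate this from Kubernetes resources today.
_format_version: "2.1"
upstreams:
- name: echo-upstream
  targets:
  - target: echo-stable.default.svc:80   # receives ~90% of traffic
    weight: 90
  - target: echo-canary.default.svc:80   # receives ~10% of traffic
    weight: 10
```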

debu99 commented 3 years ago

Any update?

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

shaneutt commented 3 years ago

@debu99 just to give you a heads-up on where we stand with this issue:

So far there are several other issues and tasks that have been higher priority, so we do not have a timing expectation for when this would get started just yet (we will update here if and when we do). In the meantime, we are open to contributors taking this issue on, so if that's something you would be interested in, please let us know.

lework commented 2 years ago

When will this feature be added?

shaneutt commented 2 years ago

This feature is not currently on the roadmap for the maintainers (our focus at the time of writing is on Gateway API support, which, as a potentially relevant side note, will enable options for weighted traffic). Given the age of the issue (and its comments) and some uncertainty about the value proposition, we're going to consider this closed as "on hold" until we have a clearer rationale for it. If any contributors wish to take this one on, we would be happy to re-open and re-assess it and provide help and review.

If you're a Kong customer desiring this support: we suggest reaching out to your account representative, who will work with our product team to assess it. For community members who are not Kong customers, we would invite you to provide a detailed overview of your use cases, requirements, and acceptance criteria for this feature, as this will help us better understand the community need and help us re-assess and potentially prioritize this issue.
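
For reference, weighted traffic in the Gateway API is expressed with weighted backendRefs on an HTTPRoute; a minimal sketch follows (Gateway and Service names are placeholders, and what is supported depends on the controller's Gateway API implementation):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo-split
spec:
  parentRefs:
  - name: kong                # placeholder Gateway name
  rules:
  - backendRefs:
    - name: echo-stable       # placeholder Services
      port: 80
      weight: 90
    - name: echo-canary
      port: 80
      weight: 10
```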

ebarped commented 2 years ago

It seems that every "known" API gateway/service mesh has an integration with Flagger except Kong:

More info: https://flagger.app/#progressive-delivery

Also, there is a canary plugin, but it is Enterprise-only, so I think the logic is already implemented but just not delivered to the open-source version... :disappointed:

shaneutt commented 2 years ago

As was said previously, we (the maintainers) are still open to suggestions as to whether we would accept this feature request, but we have to weigh and balance the features and maintenance on our roadmap because time is simply finite. Pointing out that other implementations have a Flagger integration is interesting, but it's hard to quantify how much weight that alone adds to the issue to prioritize it over anything else.

What would help add more weight would be a detailed feature request including user stories and use cases, plus a clear illustration of the value this integration adds, in depth and with specific acceptance criteria. If you have the time to write that up (and since this issue is quite old and didn't originally mention Flagger), I would encourage you to create a separate new feature request specifically for the Flagger integration. If you're a Kong customer, I would additionally recommend that you then share a link to that issue with your account representative, as that would add further weight to it.