Closed: @Gerrit-K closed this issue 6 days ago.
@Gerrit-K: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance. The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
/remove-kind bug
The expectation here is not detailed enough to explain the real-world use-case reasoning. For example, a user configures the canary feature to send traffic to a pod in the cluster, with a clear determination on the "canary" deployment. With such a determined destination, introducing a redirect, and a temporal-redirect at that, is a change of the determined goal of sending traffic to a specific pod. Why should someone first decide to use the canary feature, with a clear determination to send traffic to a specific workload, and then follow it up by sabotaging the config done towards that goal?
@longwuyuan Fair point. And in that sense I assume my use-case might not be a good one. However, I'm just reporting here that some combination of annotations does not work as someone might expect (at least not as I expected).
My use-case is that I want to temporarily place a maintenance page in front of my app, which "disables" access to that app, with the possibility of still circumventing this to test the app. Of course this is not "canarying", and there are probably way better, more sophisticated, and more secure approaches, but a quite simple one I found was to use the canary feature to redirect traffic based on header or cookie values. More specifically: with the right configuration, I could have all regular users redirected to that maintenance page, while I can set a specific cookie and still work with the (weakly) protected app.
However, since the maintenance page works a bit differently than the regular app, I also need to modify the path when redirecting, hence the `temporal-redirect` (I didn't try with `rewrite-target` yet, but it's on the list).
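For context, one way the cookie-based variant could be wired up is sketched below. This is a hedged illustration only: the cookie name, maintenance URL, and service name are placeholders I made up, and, per this very issue, the redirect annotation does not actually fire on a canary ingress.

```yaml
# Hypothetical sketch of the maintenance setup described above.
# Regular users (no cookie) land on this canary via canary-weight: 100
# and would ideally be redirected to the maintenance page; setting the
# cookie "bypass-maintenance=never" forces the primary ingress, i.e.
# the real app (canary-by-cookie takes precedence over canary-weight).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: maintenance
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"
    nginx.ingress.kubernetes.io/canary-by-cookie: "bypass-maintenance"
    # this is the part that, per this issue, is ignored on a canary ingress
    nginx.ingress.kubernetes.io/temporal-redirect: "https://maintenance.example.com/app"
spec:
  rules:
    - host: foo.bar
      http:
        paths:
          - backend:
              service:
                name: app
                port:
                  number: 80
            path: /
            pathType: Prefix
```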
As said earlier, I'm aware that this is not a case the canary feature was designed for. I just want to know if those two annotations/features are intended to be incompatible or if it's really a bug. If the former, a corresponding hint in the documentation could be helpful.
Edit: to make it a bit more clear what I would expect from the implementation: imagine two identical ingresses, `app` and `canary`, except that `canary` has these additional annotations:

nginx.ingress.kubernetes.io/canary: 'false'
nginx.ingress.kubernetes.io/canary-by-header: 'canary'
nginx.ingress.kubernetes.io/temporal-redirect: http://foo.bar

When sending a request with the header `canary: always`, I would expect to be redirected to http://foo.bar.
The project is too short on resources. There is no choice but to focus on implementing the Gateway API and ensuring that the controller is secure by default, out of the box. We are also minimizing any effort to maintain or support features that are not closely related to, or implied by, the K8S Ingress API. We have deprecated some long-supported features like the tcp/udp forwarding functionality.
So there is not much work possible on implementing such a rare use case as canary & redirect. In fact, it's not really a convention to configure canary deployments while also doing redirects in the same configuration set.
So I am closing this issue for now as it does not track any action item and just adds to the tally of open issues. Thanks.
/close
@longwuyuan: Closing this issue.
What happened:
When deploying two identical ingresses, except that the second one has the following additional annotations:
the `temporal-redirect` annotation doesn't work. I.e., when a request with the canary header hits the canary ingress, it doesn't get redirected to https://google.com but instead is just routed to the backend service as if the annotation was not present.

What you expected to happen:
The annotation should work, even if used at a canary ingress.
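For comparison, my understanding is that the same annotation on a plain (non-canary) ingress does issue the redirect. A minimal sketch under that assumption, reusing the host and service names from the manifests in this report:

```yaml
# Minimal non-canary ingress: here temporal-redirect should behave as
# documented and answer requests for foo.bar with a temporary redirect
# to the target URL instead of proxying to the backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirect-only
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/temporal-redirect: "https://google.com"
spec:
  rules:
    - host: foo.bar
      http:
        paths:
          - backend:
              service:
                name: production
                port:
                  number: 80
            path: /
            pathType: Prefix
```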
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):

Kubernetes version (use `kubectl version`):

Environment:
Cloud provider or hardware configuration: Azure Kubernetes Service
How was the ingress-nginx-controller installed: helm chart, but via Pulumi. Pulumi doesn't use helm directly, but rather renders the manifests of the chart and manages them on its own. The chart version used is 4.4.0. Values:
```ts
// we're using Pulumi with TypeScript, so this is an object which contains
// references, but they're likely not relevant for this issue
{
  fullnameOverride: 'nginx',
  controller: {
    metrics: {
      enabled: true,
      port: metricsPort,
      service: {
        annotations: {
          'prometheus.io/scrape': 'true',
          'prometheus.io/port': `${metricsPort}`,
          'prometheus.io/scheme': 'http',
        },
      },
    },
    ingressClass: ingressClassName,
    ingressClassResource: {
      name: ingressClassName,
      enabled: true,
      default: true,
    },
    service: {
      externalTrafficPolicy: 'Local',
    },
    extraArgs: {
      'default-ssl-certificate': certificateSecretName,
    },
    priorityClassName: priorityClass.metadata.name,
  },
}
```

How to reproduce this issue: I used RancherDesktop but I assume the steps can also be reproduced on kind/k3s/minikube or any other cluster. I installed the ingress controller as suggested from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/baremetal/deploy.yaml.
Then I've deployed these two deployments and services (note: I needed to choose a different echoserver due to my ARM architecture):
Deployments and Services
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
        - name: production
          image: kicbase/echo-server:1.0
          ports:
            - containerPort: 8080
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary
  labels:
    app: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
        - name: canary
          image: kicbase/echo-server:1.0
          ports:
            - containerPort: 8080
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: production
  labels:
    app: production
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: production
---
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    app: canary
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: canary
```

Those are paired with these two ingress objects:
Ingresses
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
  labels:
    app.kubernetes.io/name: production
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: foo.bar
      http:
        paths:
          - backend:
              service:
                name: production
                port:
                  number: 80
            path: /
            pathType: Prefix
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  labels:
    app.kubernetes.io/name: canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "canary"
    nginx.ingress.kubernetes.io/temporal-redirect: "https://google.com"
spec:
  rules:
    - host: foo.bar
      http:
        paths:
          - backend:
              service:
                name: canary
                port:
                  number: 80
            path: /
            pathType: Prefix
```

Now making a request without the canary header results in the expected response:
Commands & output
```
POD_NAME=$(k get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o NAME)
kubectl exec -it -n ingress-nginx $POD_NAME -- curl -sSLi -H 'Host: foo.bar' localhost

# results in:
HTTP/1.1 200 OK
Date: Tue, 27 Dec 2022 10:56:55 GMT
Content-Type: text/plain
Content-Length: 318
Connection: keep-alive

Request served by production-b7c6f88d5-6bj9f

HTTP/1.1 GET /

Host: foo.bar
Accept: */*
User-Agent: curl/7.83.1
X-Forwarded-For: 127.0.0.1
X-Forwarded-Host: foo.bar
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 127.0.0.1
X-Request-Id: 985b163bdceed8bb299d0c4c5b831402
X-Scheme: http
```

But when the header `canary: always` is added, the request is just routed to the canary pod, without the redirect from the `temporal-redirect` annotation:

Commands & output
```
POD_NAME=$(k get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o NAME)
kubectl exec -it -n ingress-nginx $POD_NAME -- curl -sSLi -H 'Host: foo.bar' -H 'canary: always' localhost

# results in:
HTTP/1.1 200 OK
Date: Tue, 27 Dec 2022 10:53:56 GMT
Content-Type: text/plain
Content-Length: 330
Connection: keep-alive

Request served by canary-758dcff8c5-fs2z8

HTTP/1.1 GET /

Host: foo.bar
Accept: */*
Canary: always
User-Agent: curl/7.83.1
X-Forwarded-For: 127.0.0.1
X-Forwarded-Host: foo.bar
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 127.0.0.1
X-Request-Id: a3ba406a177b4f3b000cde0bac2908cb
X-Scheme: http
```