jcmoraisjr / haproxy-ingress

HAProxy Ingress
https://haproxy-ingress.github.io
Apache License 2.0

Annotation support: rewrite-target #55

Closed: ausmith closed this issue 6 years ago

ausmith commented 7 years ago

Kubernetes ingress controllers support the annotation ingress.kubernetes.io/rewrite-target. This is a valuable annotation to support in microservice architectures.

Example: I have two services, an API and a telemetry service, both on myapi.domain.com. The path set up in the Kubernetes ingress definition for the API is /api, while /telemetry is used for the telemetry service. We might then reuse that same telemetry service with a different domain, but for some reason the path has to be /stats instead of /telemetry. As such, the telemetry service has been coded to expect inbound traffic on the path /, and rewrite-target is used to rewrite both /telemetry and /stats to /.

We believe (and are testing) that the resulting haproxy configuration should likely be something like:

reqrep ^([^\ :]*)\ /telemetry/(.*)     \1\ /\2  if { path_beg /telemetry }
use_backend out-telemetry-80 if { path_beg /telemetry }

We have already ruled regsub out for this config, as it does not seem to support capture groups (it does not allow closing parentheses in the regex). Thoughts?
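As an aside, the capture-group behavior the reqrep line depends on can be sketched outside HAProxy. This is a hypothetical illustration in Go (the `stripTelemetry` helper is ours, not part of the controller):

```go
package main

import (
	"fmt"
	"regexp"
)

// stripTelemetry models the proposed reqrep rule: drop the /telemetry
// prefix and keep the remainder of the path via a capture group.
var telemetryRe = regexp.MustCompile(`^/telemetry/(.*)$`)

func stripTelemetry(path string) string {
	return telemetryRe.ReplaceAllString(path, "/$1")
}

func main() {
	fmt.Println(stripTelemetry("/telemetry/latency")) // prints "/latency"
	fmt.Println(stripTelemetry("/api/latency"))       // unchanged: "/api/latency"
}
```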

ausmith commented 7 years ago

The following rewrite-target options are available in master now that #57 has been merged.

rewrite target   ingress path   request path   output
/                /abc           /abc           /
/                /abc           /abc/          /
/                /abc           /abc/x         /x
/y               /abc           /abc           /y
/y               /abc           /abc/          /y/
/y               /abc           /abc/x         /y/x
/                /abc/          /abc           404
/                /abc/          /abc/          /
/                /abc/          /abc/x         /x
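The matrix above can be captured as a small sketch of the mapping (an assumption drawn from the observed outputs, not the controller's actual template code):

```go
package main

import (
	"fmt"
	"strings"
)

// rewrite sketches the behavior in the table: requests matching the
// ingress path have that prefix replaced by the rewrite target.
func rewrite(target, ingressPath, reqPath string) string {
	if strings.HasSuffix(ingressPath, "/") {
		// A trailing-slash ingress path only matches requests that
		// include the slash, hence /abc -> 404 for ingress path /abc/.
		if !strings.HasPrefix(reqPath, ingressPath) {
			return "404"
		}
		rest := strings.TrimPrefix(reqPath, ingressPath) // "" or "x"
		return strings.TrimSuffix(target, "/") + "/" + rest
	}
	if reqPath != ingressPath && !strings.HasPrefix(reqPath, ingressPath+"/") {
		return "404"
	}
	rest := strings.TrimPrefix(reqPath, ingressPath) // "", "/" or "/x"
	out := strings.TrimSuffix(target, "/") + rest
	if out == "" {
		out = "/"
	}
	return out
}

func main() {
	fmt.Println(rewrite("/", "/abc", "/abc/x"))  // "/x"
	fmt.Println(rewrite("/y", "/abc", "/abc/x")) // "/y/x"
	fmt.Println(rewrite("/", "/abc/", "/abc"))   // "404"
}
```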

That meets my business needs for this annotation as described in the original issue. @jcmoraisjr should we leave this issue open for handling annotations with a trailing slash or open a separate issue for that?

jcmoraisjr commented 7 years ago

No need to create a new one, this very issue is enough.

elideveloper commented 6 years ago

There is a problem when a requested path contains a repeated segment: a rewrite target of "/" cuts the segment out twice. For instance, for the request "/person/person/find" we need to keep "/person/find" but get just "/find"; the same happens with "/p/person/find" -> "/erson/find". So, is this valid behaviour, and how do we deal with it? Our config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy
  annotations:
    ingress.kubernetes.io/rewrite-target: "/"
    kubernetes.io/ingress.class: "haproxy"
spec:
  rules:

ausmith commented 6 years ago

@elideveloper interesting results, clearly not on the above matrix of test cases we verified. At present it does 2 reqreps to appropriately handle cases with and without content following the path defined by the ingress config. https://github.com/jcmoraisjr/haproxy-ingress/blob/master/rootfs/etc/haproxy/template/haproxy.tmpl#L309-L310

So if you have an ingress path defined as /person and a request comes in for /person then the second reqrep triggers.

If you have an ingress path defined as /person and a request comes in for /person/find then the first reqrep triggers.

As such, neither of your scenarios is expected behaviour, so we probably need to test for those going forward. Up to @jcmoraisjr whether that should be a separate bug issue or stay here?
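One way to see the reported over-rewrite is to model the prefix rewrite as a single regex and apply it twice. This is only a simplified model of the two reqrep rules linked above, not the actual template:

```go
package main

import (
	"fmt"
	"regexp"
)

// personRe models the prefix rewrite for an ingress path of /person
// with a rewrite target of "/" (a simplification, not HAProxy syntax).
var personRe = regexp.MustCompile(`^/person(/.*)?$`)

func rewriteOnce(path string) string {
	out := personRe.ReplaceAllString(path, "$1")
	if out == "" {
		out = "/"
	}
	return out
}

func main() {
	// Applied once, the repeated segment survives as expected:
	fmt.Println(rewriteOnce("/person/person/find")) // "/person/find"
	// But if the rewrite fires a second time, the segment is cut again,
	// reproducing the behavior reported above:
	fmt.Println(rewriteOnce(rewriteOnce("/person/person/find"))) // "/find"
}
```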

jcmoraisjr commented 6 years ago

There is a pending improvement detailed in #57 which hasn't been implemented yet; a copy is quoted below. I already have a proposal for this, and it might fix this behavior. No need to create a new issue; I'll post here as soon as the changes are pushed to master.


... [an] approach (not tested, just the idea):

... {{ $location.Path }}(.*$) \1\ {{ $rewriteTarget }}{{ if and (ne $rewriteTarget "/") (endsWith $location.Path "/")}}/{{ end }}\2 if...

in order to:

endsWith would be a new func to be implemented (template.go)
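A minimal sketch of how such an endsWith helper could be registered as a Go template function (the template text and data field below are illustrative assumptions, not the real haproxy.tmpl):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// funcs registers the proposed endsWith helper so templates can call it,
// analogous to adding it to the FuncMap in template.go.
var funcs = template.FuncMap{
	"endsWith": func(s, suffix string) bool {
		return strings.HasSuffix(s, suffix)
	},
}

// render evaluates a toy template that branches on a trailing slash.
func render(path string) string {
	tmpl := template.Must(template.New("t").Funcs(funcs).Parse(
		`{{ if endsWith .Path "/" }}slash{{ else }}no-slash{{ end }}`))
	var b strings.Builder
	if err := tmpl.Execute(&b, struct{ Path string }{Path: path}); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(render("/telemetry/")) // "slash"
	fmt.Println(render("/telemetry"))  // "no-slash"
}
```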

jcmoraisjr commented 6 years ago

@elideveloper I changed the approach in 7a86ac9, which may have fixed this issue.

@ausmith Let me know if I broke something.

I will update the snapshot tag with this change within the next two days.

elideveloper commented 6 years ago

@jcmoraisjr, ok, thanks for fixing!

ausmith commented 6 years ago

@jcmoraisjr I'm sorry, but I do not have the bandwidth to test this in the cluster I manage at work right now. I have managed to get some time prioritized this quarter to work on the ingress improvements discussed here and in #58, but sadly I don't see that happening in November. I will comment here when I get time to do that work.

jcmoraisjr commented 6 years ago

No problem, take your time. Please close this issue when you get the time to verify if everything is still working on your environment.

jcmoraisjr commented 6 years ago

Closing. Feel free to post here any issue regarding the rewrite annotation.

NickMRamirez commented 6 years ago

This is not working for me using the latest version. My ingress-controller is set up as follows:

---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: ingress
---
apiVersion: apps/v1
kind: Deployment 
metadata:
  name: ingress-default-backend-deployment 
  namespace: ingress
  labels:
    app: defaultserver
spec: 
  replicas: 1 
  selector: 
    matchLabels:
      app: defaultserver
  template: 
    metadata:
      labels:
        app: defaultserver
      namespace: ingress
    spec: 
      containers:
        - name: ingress-default-backend-pod 
          image: gcr.io/google_containers/defaultbackend:1.0 
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend-service
  namespace: ingress
  labels:
    app: defaultserver
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: defaultserver

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
      name: haproxy-ingress-pod
      namespace: ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend-service
        - --default-ssl-certificate=$(POD_NAMESPACE)/tls-secret
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            value: ingress
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress-service
  namespace: ingress
  labels:
    run: haproxy-ingress
spec:
  ports:
  - port: 80
    protocol: TCP
  type: NodePort
  selector:
    run: haproxy-ingress

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: ingress
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: ingress
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller

I have a service:

---
apiVersion: apps/v1beta2
kind: Deployment 
metadata:
  name: nginx-deployment 
  namespace: ingress
  labels:
    app: nginx 
spec: 
  replicas: 1 
  selector: 
    matchLabels:
      app: nginx
  template: 
    metadata:
      labels:
        app: nginx
      namespace: ingress
    spec: 
      containers:
        - name: nginx 
          image: nginx:latest 
          ports:
            - containerPort: 80 
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: ingress
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx

And the ingress rule is as follows:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /app
            backend:
              serviceName: nginx-service
              servicePort: 80

This always directs to the default service. If I use a path of / instead of /app for the ingress rule, then it works.

jcmoraisjr commented 6 years ago

This is a v0.5 feature which hasn't been tagged as latest yet; please use the :canary tag instead.

styk-tv commented 6 years ago

@jcmoraisjr I was going to confirm it works fine; however, pulling :canary into Kubernetes 1.8.6 is crashing all my nodes. The exact same process with :latest works fine, but then obviously I cannot use rewrite-target.

Name:                      haproxy-ingress-5d8ccc9c78-srxms
Namespace:                 default
Node:                      ip-172-32-52-27.ec2.internal/172.32.52.27
Start Time:                Wed, 28 Feb 2018 13:26:20 +0000
Labels:                    pod-template-hash=1847775734
                           run=haproxy-ingress
Annotations:               kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"haproxy-ingress-5d8ccc9c78","uid":"f6ed2c21-1c8a-11e8-9e18-0a31c...
                           kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container haproxy-ingress
Status:                    Terminating (expires Wed, 28 Feb 2018 13:32:00 +0000)
Termination Grace Period:  30s
Reason:                    NodeLost
Message:                   Node ip-172-32-52-27.ec2.internal which was running pod haproxy-ingress-5d8ccc9c78-srxms is unresponsive
IP:
Controlled By:             ReplicaSet/haproxy-ingress-5d8ccc9c78
Containers:
  haproxy-ingress:
    Container ID:
    Image:         quay.io/jcmoraisjr/haproxy-ingress:canary
    Image ID:
    Ports:         80/TCP, 443/TCP, 1936/TCP
    Args:
      --default-backend-service=default/ingress-default-backend
      --default-ssl-certificate=default/tls-secret
      --configmap=$(POD_NAMESPACE)/haproxy-configmap
      --reload-strategy=native
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POD_NAME:       haproxy-ingress-5d8ccc9c78-srxms (v1:metadata.name)
      POD_NAMESPACE:  default (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kckkh (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-kckkh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kckkh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age   From                                   Message
  ----    ------                  ----  ----                                   -------
  Normal  Scheduled               16m   default-scheduler                      Successfully assigned haproxy-ingress-5d8ccc9c78-srxms to ip-172-32-52-27.ec2.internal
  Normal  SuccessfulMountVolume   16m   kubelet, ip-172-32-52-27.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-kckkh"
  Normal  Pulling                 16m   kubelet, ip-172-32-52-27.ec2.internal  pulling image "quay.io/jcmoraisjr/haproxy-ingress:canary"
  Normal  NodeControllerEviction  10m   controllermanager                      Marking for deletion Pod haproxy-ingress-5d8ccc9c78-srxms from Node ip-172-32-52-27.ec2.internal

styk-tv commented 6 years ago

Well, believe it or not, :canary crashed my setup 4 times (rebuilt from zero), but the :v0.5-beta.1 tag works fine and I can confirm rewrite-target is working. Not sure what's so special about these containers; they are probably identical except for the tag. However, one works and one doesn't in my 1.8.6 setup.

jcmoraisjr commented 6 years ago

Hmm... perhaps you have a cached old canary version? They share exactly the same blob, as you can see below:

core@docker ~ $ docker pull quay.io/jcmoraisjr/haproxy-ingress:v0.5-beta.1
v0.5-beta.1: Pulling from jcmoraisjr/haproxy-ingress
128191993b8a: Pull complete
...
Digest: sha256:edba2935ce2f9b1df1c71386ac2ee3e309e608a64262818025693056ea9d150f
Status: Downloaded newer image for quay.io/jcmoraisjr/haproxy-ingress:v0.5-beta.1

core@docker ~ $ docker pull quay.io/jcmoraisjr/haproxy-ingress:canary
canary: Pulling from jcmoraisjr/haproxy-ingress
128191993b8a: Already exists
...
Digest: sha256:77f040655321ae30060e669efba9e3696d6462431b94fb8b86b5228698f913fd
Status: Downloaded newer image for quay.io/jcmoraisjr/haproxy-ingress:canary

core@docker ~ $ docker images|grep haproxy-ingress
quay.io/jcmoraisjr/haproxy-ingress   canary              52b1b5064c57        2 months ago        54.6MB
quay.io/jcmoraisjr/haproxy-ingress   v0.5-beta.1         52b1b5064c57        2 months ago        54.6MB
core@docker ~ $

styk-tv commented 6 years ago

It was a default kops 1.8.6 install, rebuilt each time, with the image pulled directly into k8s from quay using the standard example from https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/. The only change was in the haproxy-ingress-deployment.yaml file at line 19: with ":canary" it crashes every time, but with ":v0.5-beta.1" it works fine.

Strange are the mysteries of CS. I'm sure there are a couple of AI bots somewhere having a laugh at this. Thanks for the reply.

ausmith commented 6 years ago

I have pulled v0.5-beta.2 and confirmed this feature is working. In the process of doing that confirmation, I was able to put together a mini test suite for validating future ingress versions quickly. For now, that suite will be internal to our use cases, but in the future I intend to contribute it back to this project.

jcmoraisjr commented 6 years ago

Nice, thanks! That would be very handy.