openservicemesh / osm

Open Service Mesh (OSM) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
https://openservicemesh.io/
Apache License 2.0

External Ingress into OSM / Permissive Traffic Policy Issue #4526

Closed ashdubeyaz closed 2 years ago

ashdubeyaz commented 2 years ago

The instruction provided did not yield the desired results.

trstringer commented 2 years ago

Hello and thank you for opening this issue! What instruction have you followed and what result did you observe? Please provide more information on the issue you're seeing. Thank you!

ashdubeyaz commented 2 years ago

Hello Thomas, I am trying to enable ingress with the NGINX Ingress Controller by following this guide:

https://release-v0-11.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/

The first issue is that

https://raw.githubusercontent.com/openservicemesh/osm/main/docs/example/manifests/samples/httpbin/httpbin.yaml

results in a 404. To get around it, I found the httpbin manifest elsewhere and deployed it.
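For reference, the manifest I deployed was along the lines of the sketch below. The image (the commonly used kennethreitz/httpbin), the app: httpbin label, and the 8000-to-80 port mapping are assumptions on my part, not the exact contents of the file from the OSM repo, so adjust them to whatever the demo you follow expects.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
      - name: httpbin
        image: docker.io/kennethreitz/httpbin   # assumed image; the OSM sample may pin a different one
        ports:
        - containerPort: 80                     # httpbin serves on 80 in this image
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http
    port: 8000        # service port, matching the 8000/TCP shown by kubectl get svc below
    targetPort: 80    # container port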

Ingress looks like this -

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: httpbin
  annotations:
    kubernetes.io/ingress.class: mydemoingress #{{ .Values.ingress.className }}
spec:
  ingressClassName: mydemoingress
  rules:

Ingress Backend looks like this

kind: IngressBackend
apiVersion: policy.openservicemesh.io/v1alpha1
metadata:
  name: httpbin-be
  namespace: httpbin
spec:
  backends:

PS C:\Users\ashutdub> kubectl get pods -n httpbin
NAME                      READY   STATUS    RESTARTS   AGE
httpbin-cdc9df978-zxfkg   2/2     Running   0          17m

PS C:\Users\ashutdub> kubectl get svc -n httpbin
NAME      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)    AGE
httpbin   ClusterIP   192.168.115.186   <none>        8000/TCP   113m

Both the namespaces httpbin and ns-ingress (where Nginx is deployed) are monitored by OSM. Pods were restarted after onboarding the namespaces with OSM.
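For context, the onboarding and restarts described above were done with commands roughly like the ones below. The exact osm CLI flags and the deployment-wide restart are my shorthand, and note that (as discussed later in this thread) the ingress controller's namespace should not end up with a sidecar injected.

osm namespace add httpbin
osm namespace add ns-ingress
kubectl rollout restart deployment -n httpbin      # restart pods so sidecars get injected
kubectl rollout restart deployment -n ns-ingress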

Please let me know if there is anything I can supply here.

Regards.

ashdubeyaz commented 2 years ago

I am able to use port forwarding to the httpbin service like this -

kubectl port-forward service/httpbin 8000:8000 -n ns-httpbin

and I am able to see the response successfully.

However, when trying to connect to the service using the public IP http://20.72.137.xx:80, I get an ERR_CONNECTION_RESET response.

shalier commented 2 years ago

@ashdubeyaz, Please follow the steps in https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/ - this is the most recent documentation and has the updated link to the httpbin manifest. The issue might be because the sidecar injection wasn't disabled for the namespace nginx is deployed in.
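For example, the v1.0 ingress demo onboards the ingress controller's namespace with sidecar injection turned off. The commands below are a sketch using the namespace name from this thread; the --disable-sidecar-injection flag and the disabled annotation value should be double-checked against your OSM version.

# Option 1: onboard the nginx namespace without sidecar injection
osm namespace add ns-ingress --disable-sidecar-injection

# Option 2: flip the annotation on an already-onboarded namespace, then restart nginx
kubectl annotate namespace ns-ingress openservicemesh.io/sidecar-injection=disabled --overwrite
kubectl rollout restart deployment -n ns-ingress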

ashdubeyaz commented 2 years ago

Hello, thanks for pointing out the right version of the document. I have successfully executed the steps outlined and was able to implement the NGINX ingress and bring traffic from outside into the mesh for the example httpbin workload. The two configurations that made it work were:

1. Disabling the sidecar injection in the NGINX namespace.

2. Supplying the following annotation in the Ingress configuration:

annotations:
  kubernetes.io/ingress.class:

However, when I repeat the same steps for the REAL workload, I still get a 502 Bad Gateway error.

Here is what my configuration looks like

  1. I am running the mesh in Permissive Traffic Policy Mode (a quick check for this setting is sketched after this list). I have run through the demo and tested the example httpbin service. Here is the output:

PS C:\Users\ashutdub\Trellis> kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
HTTP/1.1 200 OK
server: envoy
date: Fri, 11 Feb 2022 17:56:32 GMT
content-type: text/html; charset=utf-8
content-length: 9593
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 12

This is what the osm-mesh-config looks like -

apiVersion: config.openservicemesh.io/v1alpha1
kind: MeshConfig
metadata:
  creationTimestamp: '2022-02-11T15:54:39Z'
  generation: 2
  managedFields:

  2. My REAL workload namespace (ns-boutique) is onboarded with OSM:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-boutique
  uid: 3b2ee983-9d8e-4659-901d-4e4f7aa7c9ea
  resourceVersion: '310035'
  creationTimestamp: '2022-02-11T00:07:34Z'
  labels:
    azure-key-vault-env-injection: enabled
    kubernetes.io/metadata.name: ns-boutique
    mylabel: boutique
    openservicemesh.io/monitored-by: osm
  annotations:
    openservicemesh.io/sidecar-injection: enabled
  managedFields:

  3. The deployments in the ns-boutique namespace have been rolling-restarted several times. All pods show the Envoy sidecar injected.

  4. The Ingress configuration looks like the following:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: boutiqueingress
  namespace: ns-boutique
  uid: 8795a944-2e5a-4a23-81bc-657e0fa9fcd5
  resourceVersion: '286165'
  generation: 11
  creationTimestamp: '2022-02-11T00:07:38Z'
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"mydemoingress"},"name":"boutiqueingress","namespace":"ns-boutique"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"frontend","port":{"number":80}}},"path":"/","pathType":"ImplementationSpecific"}]}}]}}
    kubernetes.io/ingress.class: mydemoingress
    meta.helm.sh/release-name: boutique
    meta.helm.sh/release-namespace: default
  managedFields:
    - manager: terraform-provider-helm_v2.4.0_x5.exe
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-02-11T00:07:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubernetes.io/ingress.class: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/managed-by: {}
    - manager: nginx-ingress-controller
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-02-11T00:07:47Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:loadBalancer:
            f:ingress: {}
      subresource: status
    - manager: kubectl-client-side-apply
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-02-11T14:10:43Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          f:rules: {}
  selfLink: /apis/networking.k8s.io/v1/namespaces/ns-boutique/ingresses/boutiqueingress
status:
  loadBalancer:
    ingress:
      - ip: 20.72.137.28
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: frontend
                port:
                  number: 80
  5. The IngressBackend config looks like this:

apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  annotations:
    meta.helm.sh/release-name: boutique
    meta.helm.sh/release-namespace: default
  creationTimestamp: '2022-02-11T00:07:38Z'
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
    - apiVersion: policy.openservicemesh.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/managed-by: {}
        f:spec:
          .: {}
          f:backends: {}
          f:sources: {}
      manager: terraform-provider-helm_v2.4.0_x5.exe
      operation: Update
      time: '2022-02-11T00:07:38Z'
  name: boutique-be
  namespace: ns-boutique
  resourceVersion: '82204'
  uid: b6d55381-c73a-4323-9dd5-f236e7070c96
  selfLink: >-
    /apis/policy.openservicemesh.io/v1alpha1/namespaces/ns-boutique/ingressbackends/boutique-be
spec:
  backends:
    - name: frontend
      port:
        number: 80
        protocol: http
  sources:
    - kind: Service
      name: nginix-ingress-ingress-nginx-controller
      namespace: ns-ingress

  6. The NGINX service config is as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginix-ingress-ingress-nginx-controller
  namespace: ns-ingress
  uid: 01a482aa-5eaa-443a-b754-355793e1e752
  resourceVersion: '2621'
  creationTimestamp: '2022-02-10T18:49:39Z'
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginix-ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    helm.sh/chart: ingress-nginx-4.0.17
  annotations:
    meta.helm.sh/release-name: nginix-ingress
    meta.helm.sh/release-namespace: ns-ingress
  finalizers:
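As referenced in item 1 above, a quick way to confirm the permissive traffic policy setting is to read it off the MeshConfig shown earlier. The resource and namespace names come from this thread; the field path is from the config.openservicemesh.io/v1alpha1 MeshConfig API and may differ across OSM versions.

# expected to print "true" when permissive traffic policy mode is on
kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'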

I have tested the routing by OFFBOARDING the ns-boutique namespace, and all services work fine.

Please let me know what I am missing.

Thanks for your help!

shalier commented 2 years ago

@ashdubeyaz Could you run the following and provide the output?

kubectl logs -n osm-system $(kubectl get pod -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') | grep error

Could I also get the yaml files for the following resources:

ashdubeyaz commented 2 years ago

Hi Shalier

Here are the artifacts you asked for:

  1. OSM controller error log: osm-controller-error.log

  2. Other related material: osm-issue.zip

Regards.

Ash.

shalier commented 2 years ago

@ashdubeyaz It looks like you're using the frontend service's port instead of its targetPort in the IngressBackend. Could you change it to the targetPort?
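To illustrate the distinction, here is a sketch of the frontend Service using the port values that appear elsewhere in this thread (the actual Service spec isn't posted here, so the selector and port numbers are assumptions): the Ingress routes to the Service port, while OSM's IngressBackend must reference the pod-side targetPort.

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ns-boutique
spec:
  selector:
    app: frontend     # assumed selector
  ports:
  - name: http
    port: 80          # what the Ingress backend references
    targetPort: 8080  # what the container listens on -- this is the number IngressBackend needs

With a Service shaped like this, backends[0].port.number in the IngressBackend should be 8080, not 80.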

ashdubeyaz commented 2 years ago

Just to clarify: you are asking me to change the port number to 8080 in the IngressBackend configuration, because the frontend service's targetPort is 8080?

ashdubeyaz commented 2 years ago

Unfortunately, the change did not have an effect. I am still getting a 500 error. Attached is the log from the ingress controller.

ingress-controller-pod.log

As soon as I offboard the boutique namespace from OSM, the app starts to work fine.
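(Offboarding here means removing the namespace from the mesh and restarting the workloads so the sidecars are dropped, roughly like this; the exact osm CLI invocation is from memory:)

osm namespace remove ns-boutique
kubectl rollout restart deployment -n ns-boutique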

shalier commented 2 years ago

@ashdubeyaz It looks like it's because Redis is TCP-based. For the redis-cart service, please add appProtocol: tcp; the IngressBackend should still use the frontend's targetPort (8080). Additional info on appProtocol

shalier commented 2 years ago

@ashdubeyaz Closing this issue. Please specify the appProtocol on the redis-cart service like below

apiVersion: v1
kind: Service
metadata:
  name: redis-cart
  namespace: ns-boutique
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
    appProtocol: tcp   # lowercase value, matching the appProtocol guidance above

and ensure the IngressBackend is using the targetPort of the frontend service:

apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  annotations:
    meta.helm.sh/release-name: boutique
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/managed-by: Helm
  name: boutique-be
  namespace: ns-boutique
spec:
  backends:
    - name: frontend
      port:
        number: 8080
        protocol: http
  sources:
    - kind: Service
      name: nginix-ingress-ingress-nginx-controller
      namespace: ns-ingress
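A quick way to verify once both changes are applied (the file names below are placeholders for wherever these manifests live, and the IP is the ingress address reported earlier in this thread):

# apply the updated Service and IngressBackend
kubectl apply -f redis-cart-service.yaml
kubectl apply -f ingressbackend-boutique.yaml

# expect an HTTP 200 from the frontend through the NGINX ingress
curl -I http://20.72.137.28/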