ashdubeyaz closed this issue 2 years ago
Hello and thank you for opening this issue! What instructions have you followed and what result did you observe? Please provide more information on the issue you're seeing. Thank you!
Hello Thomas ... I am trying to enable Ingress with Nginx Ingress controller by following -
https://release-v0-11.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/
The first issue is that the httpbin manifest link in the doc results in a 404. To get around it, I found the httpbin manifest elsewhere and deployed it.
Ingress looks like this -
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: httpbin
  annotations:
    kubernetes.io/ingress.class: mydemoingress #{{ .Values.ingress.className }}
spec:
  ingressClassName: mydemoingress
  rules:
```
Ingress Backend looks like this
```yaml
kind: IngressBackend
apiVersion: policy.openservicemesh.io/v1alpha1
metadata:
  name: httpbin-be
  namespace: httpbin
spec:
  backends:
```
```
PS C:\Users\ashutdub> kubectl get pods -n httpbin
NAME                      READY   STATUS    RESTARTS   AGE
httpbin-cdc9df978-zxfkg   2/2     Running   0          17m

PS C:\Users\ashutdub> kubectl get svc -n httpbin
NAME      TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
httpbin   ClusterIP   192.168.115.186
```
Both the namespaces httpbin and ns-ingress (where Nginx is deployed) are monitored by OSM. Pods were restarted after onboarding the namespaces with OSM.
Please let me know if there is anything I can supply here.
Regards.
I am able to use port forwarding to the httpbin service like this -
```
kubectl port-forward service/httpbin 8000:8000 -n ns-httpbin
```
and able to see the response successfully
However, when trying to connect to the service using the public IP http://20.72.137.xx:80, I get an ERR_CONNECTION_RESET response.
@ashdubeyaz, Please follow the steps in https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/ - this is the most recent documentation and has the updated link to the httpbin manifest. The issue might be because the sidecar injection wasn't disabled for the namespace nginx is deployed in.
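For anyone following along, disabling sidecar injection for the namespace nginx runs in is controlled by the `openservicemesh.io/sidecar-injection` annotation on the namespace. A minimal sketch, assuming the `ns-ingress` namespace name used later in this thread:

```yaml
# Sketch: disable OSM sidecar injection for the namespace nginx is deployed in.
# With the annotation set to "disabled", OSM will not inject an envoy sidecar
# into pods created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: ns-ingress
  annotations:
    openservicemesh.io/sidecar-injection: disabled
```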
Hello ... thanks for pointing out the right version of the document. I have successfully executed the steps outlined and was able to implement the NGINX ingress and bring traffic from outside into the mesh for the example httpbin workload. The two configurations that worked were:
1. Disabling the sidecar injection in the NGINX namespace.
2. Supplying the following annotation in the Ingress configuration:
   annotations:
     kubernetes.io/ingress.class:
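For reference, the working httpbin Ingress earlier in this thread uses the class name `mydemoingress` for this annotation, i.e.:

```yaml
annotations:
  # must match the ingress class the nginx controller was deployed with
  kubernetes.io/ingress.class: mydemoingress
```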
However, when I repeat the same steps for the REAL workload, I still get a 502 Bad Gateway error.
Here is what my configuration looks like
```
PS C:\Users\ashutdub\Trellis> kubectl exec -n curl -ti "$(kubectl get pod -n curl -l app=curl -o jsonpath='{.items[0].metadata.name}')" -c curl -- curl -I http://httpbin.httpbin:14001
HTTP/1.1 200 OK
server: envoy
date: Fri, 11 Feb 2022 17:56:32 GMT
content-type: text/html; charset=utf-8
content-length: 9593
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 12
```
This is what the osm-mesh-config looks like -
```yaml
apiVersion: config.openservicemesh.io/v1alpha1
kind: MeshConfig
metadata:
  creationTimestamp: '2022-02-11T15:54:39Z'
  generation: 2
  managedFields:
```
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-boutique
  uid: 3b2ee983-9d8e-4659-901d-4e4f7aa7c9ea
  resourceVersion: '310035'
  creationTimestamp: '2022-02-11T00:07:34Z'
  labels:
    azure-key-vault-env-injection: enabled
    kubernetes.io/metadata.name: ns-boutique
    mylabel: boutique
    openservicemesh.io/monitored-by: osm
  annotations:
    openservicemesh.io/sidecar-injection: enabled
  managedFields:
```
The deployments in the ns-boutique namespace have been rolling-restarted several times. All pods show the envoy sidecar injected.
Ingress configuration looks like the following:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: boutiqueingress
  namespace: ns-boutique
  uid: 8795a944-2e5a-4a23-81bc-657e0fa9fcd5
  resourceVersion: '286165'
  generation: 11
  creationTimestamp: '2022-02-11T00:07:38Z'
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"mydemoingress"},"name":"boutiqueingress","namespace":"ns-boutique"},"spec":{"rules":[{"http":{"paths":[{"backend":{"service":{"name":"frontend","port":{"number":80}}},"path":"/","pathType":"ImplementationSpecific"}]}}]}}
    kubernetes.io/ingress.class: mydemoingress
    meta.helm.sh/release-name: boutique
    meta.helm.sh/release-namespace: default
  managedFields:
```
IngressBackend config looks like this:

```yaml
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  annotations:
    meta.helm.sh/release-name: boutique
    meta.helm.sh/release-namespace: default
  creationTimestamp: '2022-02-11T00:07:38Z'
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
```
NGINX service config is as follows:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginix-ingress-ingress-nginx-controller
  namespace: ns-ingress
  uid: 01a482aa-5eaa-443a-b754-355793e1e752
  resourceVersion: '2621'
  creationTimestamp: '2022-02-10T18:49:39Z'
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginix-ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    helm.sh/chart: ingress-nginx-4.0.17
  annotations:
    meta.helm.sh/release-name: nginix-ingress
    meta.helm.sh/release-namespace: ns-ingress
  finalizers:
```
I have tested the routing by OFFBOARDING the ns-boutique namespace, and all services work fine.
Please let me know what I am missing.
Thanks for your help!
@ashdubeyaz Could you run the following and provide the output?
```
kubectl logs -n osm-system $(kubectl get pod -n osm-system -l app=osm-controller -o jsonpath='{.items[0].metadata.name}') | grep error
```
Could I also get the yaml files for the following resources:
Hi Shalier
Here are the artifacts you asked for -
OSM controller error log: osm-controller-error.log
other related material osm-issue.zip
Regards.
Ash.
@ashdubeyaz It looks like you're using the frontend service's port instead of its targetPort for the IngressBackend; could you change it to the targetPort?
Just to clarify ... you are asking me to change the port number to 8080 in the IngressBackend configuration, because the frontend service's target port is 8080?
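To illustrate the port vs. targetPort distinction: a hypothetical sketch of what the frontend Service is presumed to look like (the 80/8080 numbers and resource names come from this thread; the selector is illustrative), alongside the IngressBackend it implies:

```yaml
# Hypothetical sketch of the frontend Service, for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ns-boutique
spec:
  selector:
    app: frontend        # assumed label
  ports:
  - port: 80             # service-side port the Ingress routes to
    targetPort: 8080     # container-side port the pod listens on
---
# The IngressBackend's port.number must be the targetPort (8080),
# not the service port (80), since OSM matches traffic on the
# backend pod's container port.
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  name: boutique-be
  namespace: ns-boutique
spec:
  backends:
  - name: frontend
    port:
      number: 8080
      protocol: http
  sources:
  - kind: Service
    name: nginix-ingress-ingress-nginx-controller
    namespace: ns-ingress
```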
Unfortunately, the change did not have an effect. I am still getting a 500 error. Attached is the log from the ingress controller.
As soon as I offboard the boutique namespace from OSM, the app starts to work fine.
@ashdubeyaz It looks like it's because Redis is TCP-based; please add appProtocol: tcp to the redis-cart service.
- The IngressBackend should still use the frontend's targetPort: 8080. Additional info on appProtocol
@ashdubeyaz Closing this issue. Please specify the appProtocol on the redis-cart service like below:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cart
  namespace: ns-boutique
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
    appProtocol: TCP
```
and ensure the IngressBackend uses the targetPort of the frontend service:
```yaml
apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  annotations:
    meta.helm.sh/release-name: boutique
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/managed-by: Helm
  name: boutique-be
  namespace: ns-boutique
spec:
  backends:
  - name: frontend
    port:
      number: 8080
      protocol: http
  sources:
  - kind: Service
    name: nginix-ingress-ingress-nginx-controller
    namespace: ns-ingress
```
The instructions provided did not yield the desired results.