dwoldemariam1 opened 5 years ago
Tested with:
---
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: default-cert
  namespace: emojivoto
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNxRENDQVpBQ0NRQzAyVWw2RGJCTEx6QU5CZ2txaGtpRzl3MEJBUXNGQURBV01SUXdFZ1lEVlFRRERBdGwKZUdGdGNHeGxMbU52YlRBZUZ3MHhPVEEyTVRReE56RTRNalphRncweU1EQTJNVE14TnpFNE1qWmFNQll4RkRBUwpCZ05WQkFNTUMyVjRZVzF3YkdVdVkyOXRNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDCkFRRUFwQitxaWF0aE45T1V2YjZxUDhDRk5IRjhYU2V6WmV4SzJrTzNnNEU4UnhoKzhmaGpYL3JmK0NqVmJVVUoKM2psWDN0ZGMxc1hQZ3NnN3ViNVFTNW1QQ3UzY2dieDZiQnpjQnRyd3U0Y0luRlllRmtGaGR3UllmNE5mNzZhSApXMnpkcnZna0tRSVB0cWUzVWlWcGkyeFJtMzRDSWRHMWR0ZVBWbWt4Sk4wbUgxVmlVaXlJcEFTK29FSWNyZGxFCjJ1NitMbWkwTEMxMzA4VzcvR21KdGEzYTFRM2F5eCtDSm5VUFU3Vm9RcHU4V3hyQU02ZllzUGJKYUh4Z0pOci8KdzlWR1VWTHpUMzJxRUJCTGM0RjVubHlCb0RYUVRPYTE4czh0OU1Ob1Bvd1pVZjk3NEhPNVVjU1FodW9TamxLVgpNUjlLemIrNDdla2JCN0lPYU5ZSFBkTGFKUUlEQVFBQk1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQnQ1Q1ZTCmZKNU5KL2hNOExtSCt0dWN5RW1ObmpMSk5IaFBaOVFzczlVNDh2Zkg1cEo1MjRRSjU2RFRyVWh6TWFtaDJLdTkKS0QrUkxuVzZ4SksralBsMFBZRkVBdlhvMmw1ZnpLVDhWbmVwSXBRdXJmRHdTWmpaZW81M0hLMitzRzdjUENldQphK0FPVForNHNNclQ4TWk0SGgrRnk4ZWxCZ3hCZ3BVa1Bnck1sMVFLbGRrWldyOWdnbzk0alExTFl1Z0tNaUduClBSZDBxQW9CK2tMMm9LSGZ2aDRVb05jeUtoR3FkZUczK1FpQlFheGNkK3N4QzFKSjdLRUE5Rll2U1YvMUQxTDQKUnVqaXZXRE1uK3RBd2YyZDgyVVNqZ2tOeDk0RmJ0T1VWaDNtQXU5U1FZU3p2WkpxTGZRYXl4RHpjYmxvU0xnMgpkQy9CY1g0UFpHbm5wcE1vCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2tINnFKcTJFMzA1UzkKdnFvL3dJVTBjWHhkSjdObDdFcmFRN2VEZ1R4SEdIN3grR05mK3QvNEtOVnRSUW5lT1ZmZTExeld4YytDeUR1NQp2bEJMbVk4SzdkeUJ2SHBzSE53RzJ2Qzdod2ljVmg0V1FXRjNCRmgvZzEvdnBvZGJiTjJ1K0NRcEFnKzJwN2RTCkpXbUxiRkdiZmdJaDBiVjIxNDlXYVRFazNTWWZWV0pTTElpa0JMNmdRaHl0MlVUYTdyNHVhTFFzTFhmVHhidjgKYVltMXJkclZEZHJMSDRJbWRROVR0V2hDbTd4YkdzQXpwOWl3OXNsb2ZHQWsydi9EMVVaUlV2TlBmYW9RRUV0egpnWG1lWElHZ05kQk01clh5enkzMHcyZytqQmxSLzN2Z2M3bFJ4SkNHNmhLT1VwVXhIMHJOdjdqdDZSc0hzZzVvCjFnYzkwdG9sQWdNQkFBRUNnZ0VBQkFUMmd6cWNQZWJEbk1YL20ySVdvUXNxZFltVVhpbWtSNllpNTJpUjFsZm0KTy96T1NqcDFvN2swU09ISTlSVklicCt0bVdEc3pSSWtURTg1M2pBYmpiUDNrNEhQS2Jpbk5zL0QxNFBlRlI0Uwp6STY3V1ZQVTZ3S0hwZkhaSE1jVXdzVTI5WDRrYmwrN0lKcmo4OU1xU0htVWljbDkvVFFZUVpCLzhKd3Q0OVNFCmJ2d2YxOUw5TTVJbWdqS2ltdllUdi9HbkJVajBlMXpuM094MitxZ0pFV0RBMzhJTUZ1VE1qY29SRWVwbE5ndEIKcXVHRUg3cHN1TS8rT1hiUFUrUkZzcm1PZzkxelJUdDJ5ZWgrK0t2bkpZK0JaWUVNT1hRY3pqbHNaVDd0WkVEdgpNZlV5UnZGMGI3Z1hTY2hZRFNCb3RmNktmZXQzR05iVXhEbXNiSFpYSFFLQmdRRFlSeXFIZkR0TTQyZUhvQVNqCjlGZnp6RzBBNlBlcUZEMVgvQ080QjNkVEpCeDRBamFqOEhLU2dQYytISW9Vb0lQUVllSlZuaEVCSzA2UXc1MTkKclZjSGJJSjY1WUJma0NvS3ZGa050MWFmaWhaUDU3amdJS0gwVUhpOGk0VU9hZE9ZOWlIRFVkNzZvTmh5Szd3KwpkdW1LMzN0T0l2NWZPdzFNWnlJeWFzd0J3d0tCZ1FEQ1JGYmpCSkJXc1ZCSUhtVlQ3UVVOWHJxaEZ0OWtHNERiCnA0bWZhdzkvQXgxZVBMaG5aRWh2K0NCd3ByWm1iRE1paE56TVo5TU1kOEx3S2d3WWk2R1BtaUNBZXppYW5ybS8Kbm95YmtSSmFiNml1QXRyd0R3MXhhZUNwcmJaa2hKY295MUI4Nm1nb0lrYTlSbW9HR3FLT1F1d2luOXU4M2dwcwpXR2F3UnR6Tjl3S0JnRVBTelh0L2NmbEN0d3pKR2F0d3pNUWZyMjlCbjZrdWY2NC8yOU95UTdGRytjYUlxeW51CkZYL3NBWnp3eGp5QnVkUjNYY3NMcnJsM0kwUXlsQWo5ZXZWUkNmb1FUcG1wVkFYWjJ2TjZNeWdFM2NwaEdKRHcKcXRrN0F5SGRmdlJ1SzNVa2VxSU40cWNtR2JwMERLeHFEZ01HNGx3MmpSN0FIZ04vdERHclhCNlJBb0dCQUpqZgoySlFid2s1R2lNdklCNnNzeVI0RlhzNW54bkhyNXRKMEhDdjB2eVFQV213UFVub2lnNUtCYTEzYkE0ekVOdFZDClF0TWtIUVFodHFqeUhjU3ZGUHVCcVhRU0E3QkJtaUM5N3g1NDRqMkN3dlgwenorOFNMTG9RK0NqRC9ZNEZSQUkKTnhXbURVTVAvaVR5cFhxYU9UUEVYRGkvSGRlWjBCQUUzUUo1TVVkdkFvR0FSdjRsb1p5aGJ5NGdYTnovU0x5cApERzUvRFV0d1JkcW1JUDNpSE1mekxnYkFLWGpXVmswQ012Z3JkbWdiUUhXVTcxcFp1dTlCY200WG1aM3RhWGw5CnlFWVpOUDNpMWRHRXo4VjB2QjZhdHdMSzVtTWgzTHdZbUpZaTRpSW9XMU5kSDFIa0JDNVpBUmlKS1haWFFJWTIKcGdZZVJycHNDVzk3LzlrL0xjSWN5NXc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      proxy_hide_header l5d-remote-ip;
      proxy_hide_header l5d-server-id;
  name: emoji
  namespace: emojivoto
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: emoji-svc
          servicePort: 8080
        path: /emojivoto.v1.EmojiService
  tls:
  - secretName: default-cert
    hosts:
    - example.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/upstream-vhost: web-svc.emojivoto.svc.cluster.local:80
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header l5d-remote-ip;
      proxy_hide_header l5d-server-id;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: web-svc
          servicePort: 80
        path: /
  tls:
  - secretName: default-cert
    hosts:
    - example.com
You'll want to check emojivoto out for the proto and run:
grpcurl -authority example.com -insecure -proto Emoji.proto \
  $(kubectl -n nginx get svc --no-headers | awk '{ print $4 }' | head -n1):443 \
  emojivoto.v1.EmojiService/ListAll
For verification that it works for http/1 traffic, use curl against the web service.
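A minimal sketch of that curl check, assuming the same example.com host and the nginx service's external IP from the kubectl lookup above (-k skips verification of the self-signed cert):
curl -k -H 'Host: example.com' \
  https://$(kubectl -n nginx get svc --no-headers | awk '{ print $4 }' | head -n1)/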
Here's how I'm installing nginx:
helm fetch stable/nginx-ingress
kubectl create ns nginx
cat <<EOF > nginx-chart.yml
controller:
  # config:
  #   enable-opentracing: "true"
  #   zipkin-collector-host: oc-collector.tracing
  metrics:
    enabled: true
defaultBackend:
  enabled: false
rbac:
  create: true
podSecurityPolicy:
  enabled: true
EOF
helm template nginx-ingress-*.tgz --namespace nginx --name nginx -f nginx-chart.yml | linkerd inject - | kubectl -n nginx apply -f -
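As a sanity check after the inject step, something along these lines can confirm the proxies in the nginx namespace came up (a sketch, not part of the original steps; the second command just lists container names, where linkerd-proxy should appear):
linkerd check --proxy --namespace nginx
kubectl -n nginx get po -o jsonpath='{.items[*].spec.containers[*].name}'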
@dwoldemariam1 I have reproduced the issue and have a fix for you based on the configuration that @grampelberg shared.
In your ingress definition, you can add the nginx.ingress.kubernetes.io/upstream-vhost annotation:
nginx.ingress.kubernetes.io/upstream-vhost: <backend-svc>.<namespace>.svc.cluster.local:<service-port>
Where <backend-svc>, <namespace>, and <service-port> are the name of the service, the namespace, and the service's container port, respectively.
The upstream-vhost annotation configures nginx to set the Host header, which is what Linkerd uses to display the edges.
We'll have to look into what it will take to ensure that this annotation is set dynamically, in the same way that the proxy_set_header value is configured in the nginx.ingress.kubernetes.io/configuration-snippet annotation.
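For the emojivoto example above, the filled-in annotation would look like this (using the emoji-svc service and port 8080 from the earlier Ingress):
nginx.ingress.kubernetes.io/upstream-vhost: emoji-svc.emojivoto.svc.cluster.local:8080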
@cpretzer I just tried with your suggestion and still don't see the edge.
@dwoldemariam1 thanks for trying the suggestion.
Can you share your Ingress and Deployment configs?
@cpretzer Here are the service and ingress definitions.
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
  labels:
    env: staging
    category: product
spec:
  type: ClusterIP
  ports:
  - name: grpc
    port: 81
    targetPort: 8081
  selector:
    env: staging
    component: web
    category: product
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-service
  namespace: example-namespace
  annotations:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    kubernetes.io/ingress.class: "linkerd-nginx"
    nginx.ingress.kubernetes.io/upstream-vhost: example-service.example-namespace.svc.cluster.local:grpc
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      proxy_hide_header l5d-remote-ip;
      proxy_hide_header l5d-server-id;
spec:
  rules:
  - host: example.my-domain.com
    http:
      paths:
      - path:
        backend:
          serviceName: example-service
          servicePort: grpc
  tls:
  - hosts:
    - example-service.staging.my-domain.com
    secretName: example-service-secret
By Deployment configs, do you mean the yaml file for the nginx-ingress-controller deployment?
@dwoldemariam1 yes please 😄
@dwoldemariam1 The port is misconfigured in the configuration you sent:
nginx.ingress.kubernetes.io/upstream-vhost: example-service.example-namespace.svc.cluster.local:grpc
grpc should be a port number.
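Given the Service above exposes port 81, the corrected annotation would presumably be:
nginx.ingress.kubernetes.io/upstream-vhost: example-service.example-namespace.svc.cluster.local:81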
I used helm to install the nginx ingress controller. Here's the yaml file from the GKE console:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: 2019-09-20T16:59:04Z
  generation: 7
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: my-namespace
  resourceVersion: "207256290"
  selfLink: /apis/apps/v1/namespaces/my-namespace/deployments/nginx-ingress-controller
  uid: f36fb07e-dbc7-11e9-8521-42010a9601a5
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: nginx-ingress
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        category: product
        component: controller
        env: staging
        release: nginx-ingress
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=my-namespace/nginx-ingress-default-backend
        - --publish-service=my-namespace/nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=linkerd-nginx
        - --configmap=my-namespace/nginx-ingress-controller
        - --watch-namespace=my-namespace
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: linkerd-nginx
      serviceAccountName: linkerd-nginx
      terminationGracePeriodSeconds: 60
@dwoldemariam1 the Deployment for the ingress controller looks fine. I don't see any customizations to the helm chart.
Did you get a chance to test after changing the grpc port value in the Ingress definition?
@cpretzer yeah I tested with port 81. The requests do succeed, but the edge still does not appear on the list.
The weird thing is I don't have the nginx.ingress.kubernetes.io/upstream-vhost annotation for the HTTP/1 connections, and the edges are there for those services.
@cpretzer is there something I can look for in my nginx.conf file?
@dwoldemariam1, please do send the file over so that I can compare it with my config.
You may need to use the kubectl ingress-nginx plugin to get the details of the backends.
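For reference, a sketch of the plugin commands that dump this information, assuming the plugin is installed via krew and the controller runs in my-namespace:
kubectl ingress-nginx backends --namespace my-namespace
kubectl ingress-nginx conf --namespace my-namespace --host example.my-domain.com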
Here's the backend
{
  "name": "example-namespace-example-service-grpc",
  "service": {
    "metadata": {
      "creationTimestamp": null
    },
    "spec": {
      "ports": [
        {
          "name": "grpc",
          "protocol": "TCP",
          "port": 81,
          "targetPort": 8081
        }
      ],
      "selector": {
        "category": "product",
        "component": "web",
        "env": "staging"
      },
      "clusterIP": "*******",
      "type": "ClusterIP",
      "sessionAffinity": "None"
    },
    "status": {
      "loadBalancer": {}
    }
  },
  "port": "grpc",
  "secureCACert": {
    "secret": "",
    "caFilename": "",
    "pemSha": ""
  },
  "sslPassthrough": false,
  "endpoints": [
    {
      "address": "********",
      "port": "8081"
    },
    {
      "address": "*********",
      "port": "8081"
    }
  ],
  "sessionAffinityConfig": {
    "name": "",
    "cookieSessionAffinity": {
      "name": ""
    }
  },
  "upstreamHashByConfig": {
    "upstream-hash-by-subset-size": 3
  },
  "noServer": false,
  "trafficShapingPolicy": {
    "weight": 0,
    "header": "",
    "headerValue": "",
    "cookie": ""
  }
}
And this is part of the config related to that backend
## start server example.my-domain.com
server {
    server_name example.my-domain.com ;
    listen 80 ;
    listen [::]:80 ;
    listen 443 ssl http2 ;
    listen [::]:443 ssl http2 ;
    set $proxy_upstream_name "-";
    # PEM sha: c915132fd15bb5465393950b6acb21f6742
    ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace "example-namespace";
        set $ingress_name "example-service";
        set $service_name "example-service";
        set $service_port "{1 0 grpc}";
        set $location_path "/";
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
        header_filter_by_lua_block {
            plugins.run()
        }
        body_filter_by_lua_block {
        }
        log_by_lua_block {
            balancer.log()
            monitor.call()
            plugins.run()
        }
        if ($scheme = https) {
            more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
        }
        port_in_redirect off;
        set $balancer_ewma_score -1;
        set $proxy_upstream_name "example-namespace-example-service-grpc";
        set $proxy_host $proxy_upstream_name;
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;
        set $proxy_alternative_upstream_name "";
        client_max_body_size 1m;
        # Pass the extracted client certificate to the backend
        # Allow websocket connections
        grpc_set_header Upgrade $http_upgrade;
        grpc_set_header Connection $connection_upgrade;
        grpc_set_header X-Request-ID $req_id;
        grpc_set_header X-Real-IP $the_real_ip;
        grpc_set_header X-Forwarded-For $the_real_ip;
        grpc_set_header X-Forwarded-Host $best_http_host;
        grpc_set_header X-Forwarded-Port $pass_port;
        grpc_set_header X-Forwarded-Proto $pass_access_scheme;
        grpc_set_header X-Original-URI $request_uri;
        grpc_set_header X-Scheme $pass_access_scheme;
        # Pass the original X-Forwarded-For
        grpc_set_header X-Original-Forwarded-For $http_x_forwarded_for;
        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        grpc_set_header Proxy "";
        # Custom headers to proxied server
        proxy_connect_timeout 5s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_buffering off;
        proxy_buffer_size 4k;
        proxy_buffers 4 4k;
        proxy_request_buffering on;
        proxy_http_version 1.1;
        proxy_cookie_domain off;
        proxy_cookie_path off;
        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0;
        proxy_next_upstream_tries 3;
        proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
        proxy_hide_header l5d-remote-ip;
        proxy_hide_header l5d-server-id;
        grpc_pass grpc://upstream_balancer;
        proxy_redirect off;
    }
}
@dwoldemariam1 thanks for the config file.
I just tested a configuration that I'd like you to try for your ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-service
  namespace: example-namespace
  annotations:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    kubernetes.io/ingress.class: "linkerd-nginx"
    nginx.ingress.kubernetes.io/upstream-vhost: example-service.example-namespace.svc.cluster.local:grpc
    nginx.ingress.kubernetes.io/configuration-snippet: |
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_hide_header l5d-remote-ip;
      grpc_hide_header l5d-server-id;
spec:
  rules:
  - host: example.my-domain.com
    http:
      paths:
      - path:
        backend:
          serviceName: example-service
          servicePort: grpc
  tls:
  - hosts:
    - example-service.staging.my-domain.com
    secretName: example-service-secret
The difference here is that I've updated the proxy_hide_header and proxy_set_header directives to grpc_hide_header and grpc_set_header directives.
proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
proxy_hide_header l5d-remote-ip;
proxy_hide_header l5d-server-id;
...becomes...
grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
grpc_hide_header l5d-remote-ip;
grpc_hide_header l5d-server-id;
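One quick way to confirm the snippet actually lands in the generated config is to grep the rendered nginx.conf in the controller pod (a sketch; the pod name is a placeholder for whatever kubectl get po -n my-namespace reports):
kubectl -n my-namespace exec <nginx-ingress-controller-pod> -- cat /etc/nginx/nginx.conf | grep l5d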
Still no luck... Here's something weird I noticed:
I removed the nginx controller pod from the mesh and looked at the requests in the dashboard. In the dashboard, I can see an arrow from the unmeshed nginx controller pods to my pods and the correct request in the path. When I inject the nginx pods and make new requests, this arrow goes away and the path just becomes the server reflection info... In both cases I'm getting back the expected responses to my local client. Here are screenshots.
I'm using grpc_cli to make requests.
@dwoldemariam1 That is the expected behavior for an ingress controller which is not injected with the Linkerd proxy, because uninjected pods are resolved using the tap functionality.
I have gone through my configuration again in order to reproduce the behavior. The Ingress config below results in the edge between the ingress pod and the emoji service being properly displayed in the edges command output and the dashboard:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/configuration-snippet: |
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_hide_header l5d-remote-ip;
      grpc_hide_header l5d-server-id;
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: emoji-svc.emojivoto.svc.cluster.local:8080
  name: emoji
  namespace: emojivoto
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: emoji-svc
          servicePort: 8080
        path: /emojivoto.v1.EmojiService
One thing that I noticed is that the edge did not display properly if the - host: example.com line did not exist in the rules section of the Ingress.
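For reference, the edge check used here is along these lines, assuming the emojivoto namespace from the config above:
linkerd edges deployment -n emojivoto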
Can you share your current Ingress configuration? I'd like to see the result after you changed the servicePort value.
@dwoldemariam1 any luck with this?
I'd like to close this issue, if you've got the edges working.
@cpretzer Sorry for the late response.
Still no edges. I noticed that when I added nginx.ingress.kubernetes.io/upstream-vhost, I lost the load balancing from the nginx pod to the services and only one pod was receiving traffic.
Edit: the load balancing was an issue on my end, but there is still no edge.
@dwoldemariam1 can you share the current config file for the ingress resource?
@cpretzer I shared it with you on Slack with a lot more detail and some more screenshots.
@dwoldemariam1 how are things going with this? Any updates after changing the config?
I haven't been able to see the edge yet. I am working on the queries you sent me over Slack today.
Bug Report
What is the issue?
I am using the nginx ingress controller to allow external traffic into my cluster. When I run the command linkerd edges po, I see the edges from the nginx ingress controller pods to my internal services for all services that accept HTTP/1 traffic. However, the edges to the services using gRPC do not appear on the list. This doesn't seem to affect the traffic flow, and all gRPC services seem to be getting the requests and serving them properly, but it would be nice to have the edges for better tracing.
How can it be reproduced?
Install the nginx ingress controller from https://github.com/kubernetes/ingress-nginx into your mesh. Set up a gRPC service and define an Ingress which uses the ingress controller you installed.
Run linkerd edges po - the edge from the nginx ingress controller pod to your app should not appear.
Logs, error output, etc
(If the output is long, please create a gist and paste the link here.)
linkerd check output