Closed markofranjic closed 2 years ago
@markofranjic: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
The `triage/accepted` label can be added by org members by writing `/triage accepted` in a comment.
/remove-kind bug
/kind support
This example https://kubernetes.github.io/ingress-nginx/examples/grpc/ works, so it has been documented.
Does this example work for you, or does it also fail for you?
I followed this example, but I didn't get any results.
OK. Please post information related to this from the current state of the cluster. For example:
- kubectl -n <controllernamespace> get all -o wide
- kubectl -n <controllernamespace> describe po <controllerpodname>
- kubectl -n <controllernamespace> describe svc <controllersvcname>
- kubectl get ing -A -o wide
- kubectl -n <appnamespace> get po,svc,ing -o wide
- kubectl -n <appnamespace> describe po <grpcexamplepodname>
- kubectl -n <appnamespace> describe svc <grpcexamplesvcname>
- kubectl -n <appnamespace> describe ing <grpcexampleingressname>
- Your complete and exact curl command
- Logs of the controllerpod related to your curl command
- Any other information
### kubectl -n <controllernamespace> get all -o wide
```
NAME                                            READY   STATUS      RESTARTS   AGE     IP           NODE                                NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-ms4pd       0/1     Completed   0          4d5h    10.0.8.15    aks-agentpool-37159405-vmss000003   <none>           <none>
pod/ingress-nginx-admission-patch-sdwfd        0/1     Completed   0          4d5h    10.0.8.196   aks-agentpool-37159405-vmss000004   <none>           <none>
pod/ingress-nginx-controller-5c8d66c76d-tnncr  1/1     Running     0          7h17m   10.0.8.6     aks-agentpool-37159405-vmss000003   <none>           <none>

NAME                                          TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE    SELECTOR
service/ingress-nginx-controller              LoadBalancer   10.1.24.6     20.86.213.122   80:30766/TCP,443:32639/TCP   4d5h   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission    ClusterIP      10.1.40.123

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                                                                                 SELECTOR
deployment.apps/ingress-nginx-controller   1/1     1            1           4d5h   controller   k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                  DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES                                                                                                                 SELECTOR
replicaset.apps/ingress-nginx-controller-5c8d66c76d   1         1         1       4d5h   controller   k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5c8d66c76d

NAME                                       COMPLETIONS   DURATION   AGE    CONTAINERS   IMAGES                                                                                                                           SELECTOR
job.batch/ingress-nginx-admission-create   1/1           2s         4d5h   create       k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660   controller-uid=717dbc4d-1bc8-4c0c-82c3-1815e0db8b5c
job.batch/ingress-nginx-admission-patch    1/1           2s         4d5h   patch        k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660   controller-uid=fd156b58-dac5-45e2-915b-239f64083509
```
### kubectl -n <controllernamespace> describe po <controllerpodname>
```
Name:         ingress-nginx-controller-5c8d66c76d-tnncr
Namespace:    ingress-nginx
Priority:     0
Node:         aks-agentpool-37159405-vmss000003/10.0.8.5
Start Time:   Tue, 02 Nov 2021 09:22:25 +0100
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=5c8d66c76d
Annotations:
  Normal  RELOAD  12m (x55 over 7h21m)  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
```
### kubectl -n <controllernamespace> describe svc <controllersvcname>
```
Name:        ingress-nginx-controller
Namespace:   ingress-nginx
Labels:      app.kubernetes.io/component=controller
             app.kubernetes.io/instance=ingress-nginx
             app.kubernetes.io/managed-by=Helm
             app.kubernetes.io/name=ingress-nginx
             app.kubernetes.io/version=1.0.4
             helm.sh/chart=ingress-nginx-4.0.6
Annotations:
```
### kubectl get ing -A -o wide
```
NAMESPACE   NAME           CLASS   HOSTS   ADDRESS   PORTS   AGE
default     ingress-grpc
```
### k get svc app-service-svc -o wide
```
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
app-service-svc   ClusterIP   10.1.236.151
```
### λ k describe svc app-service-svc
```
Name:        app-service-svc
Namespace:   default
Labels:
```
### NGINX Logs
```
95.168.x.x - - [02/Nov/2021:15:33:07 +0000] "PRI HTTP/2.0" 400 150 "-" "-" 0 0.060 [] [] - - - - a7c50eac27e9115ba7397652bdfec278
95.168.x.x - - [02/Nov/2021:15:33:15 +0000] "PRI HTTP/2.0" 400 150 "-" "-" 0 0.082 [] [] - - - - 041af60038ed0563ba0cf80451b68b4c
95.168.x.x - - [02/Nov/2021:15:33:15 +0000] "PRI HTTP/2.0" 400 150 "-" "-" 0 0.095 [] [] - - - - 909cc5f9e264125dd4ce85bdfcc10d0b
95.168.x.x - - [02/Nov/2021:15:33:16 +0000] "PRI HTTP/2.0" 400 150 "-" "-" 0 0.082 [] [] - - - - a41237d3651d9c9a9b56d6d895eae4ae
95.168.x.x - - [02/Nov/2021:15:51:02 +0000] "PRI HTTP/2.0" 400 150 "-" "-" 0 0.063 [] [] - - - - 434659879f040cbd51e97dff02352a9f
95.168.x.x - - [02/Nov/2021:15:51:03 +0000] "PRI HTTP/2.0" 400 150 "-" "-" 0 0.092 [] [] - - - - ffd3652c8ec6f19298f8fe122587b219
```
The info provided is missing some critical parts of what was requested, like the describe output of the ingress, your exact curl command, and similar details.
I specifically asked you to show this info for the gRPC example from the documentation, so if you can delete what you posted and run the example from the documentation, it will help.
```
Namespace:    default
Priority: 0
Node: aks-agentpool-37159405-vmss000004/10.0.8.116
Start Time: Thu, 28 Oct 2021 13:20:29 +0200
Labels: app=app-service
pod-template-hash=64f6c6c7d7
Annotations: kubectl.kubernetes.io/restartedAt: 2021-10-28T12:26:46+02:00
Status: Running
IP: 10.0.8.149
IPs:
IP: 10.0.8.149
Controlled By: ReplicaSet/app-service-deploy-64f6c6c7d7
Containers:
app-service-pod:
Container ID: containerd://8b7e54a2d5c177c4e0cf954d566e30b76fb2a35766b391c711c7e58f681526bf
Image: p3development.azurecr.io/app-service:dev
Image ID: p3development.azurecr.io/app-service@sha256:e83d0a6358f8d15f6b365414b11a0e798e75065055ae2b0367a97ec1c5be21d4
Ports: 3000/TCP, 50051/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Thu, 28 Oct 2021 13:20:38 +0200
Ready: True
Restart Count: 0
Liveness: http-get http://:rest/healthy delay=30s timeout=5s period=20s #success=1 #failure=3
Environment:
NODE_ENV: development
RIDE_GRPC: ride-svc:443
CUSTOMER_GRPC: customer-service-svc:50051
APPINSIGHTS_INSTRUMENTATIONKEY: <set to the key 'AppInsightsInstrumentationKey' in secret 'p3-secrets'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6m4gw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-6m4gw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
```
### λ kubectl -n default describe svc app-service-svc
```
Name:              app-service-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=app-service
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.1.236.151
IPs: 10.1.236.151
Port: rest 443/TCP
TargetPort: 3000/TCP
Endpoints: 10.0.8.149:3000,10.0.8.59:3000
Port: grpc 80/TCP
TargetPort: 50051/TCP
Endpoints: 10.0.8.149:50051,10.0.8.59:50051
Session Affinity: None
Events:            <none>
```
### λ k describe ingress ingress-grpc
```
Name:             ingress-grpc
Namespace: default
Address: 20.86.213.122
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
grpc.example.com terminates grpc.example.com
Rules:
Host Path Backends
---- ---- --------
grpc.example.com
/p3.protos.appservices.v1.AppService app-service-svc:grpc (10.0.8.149:50051,10.0.8.59:50051)
Annotations: cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: true
nginx.ingress.kubernetes.io/backend-protocol: GRPC
nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
  Normal  Sync    14s (x60 over 9h)  nginx-ingress-controller  Scheduled for sync
```
![1](https://user-images.githubusercontent.com/15909773/139917565-ba1a4a0e-b2bd-432d-9d76-c1ed76f38786.png)
![2](https://user-images.githubusercontent.com/15909773/139917587-d47b35a0-527a-419f-b41c-563382195b0e.JPG)
Could anybody make it clearer for me: should pure gRPC (without the reflection API, etc.) work over nginx-ingress-controller, or does only gRPC-web work? Thanks!
I think we need to test non-reflection-API gRPC outside Kubernetes first, with just vanilla nginx as a reverse proxy. If it works without an ingress and without Kubernetes, then we can potentially configure the ingress-controller. @theunrealgeek for comments.
I will also take a look
> Could anybody make it clearer for me, should pure grpc (without reflection api etc) work over nginx-ingress-controller or only grpc-web is working? Thanks!
ingress-nginx is just nginx under the hood and it does natively support proxying gRPC, since gRPC uses HTTP/2 as its transport protocol. I don’t believe any special configuration is needed apart from the right config within nginx to handle gRPC correctly.
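For reference, the "right config" here is mostly one annotation. A minimal Ingress along the lines of the docs example might look like the sketch below (the hostname, service name, and secret name are placeholders taken from this thread, not a verified working manifest):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    # Tell ingress-nginx to proxy the upstream with grpc_pass instead of proxy_pass
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - grpc.example.com          # placeholder hostname
      secretName: grpc-example-tls  # placeholder TLS secret
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service-svc  # placeholder service
                port:
                  number: 50051
```

Note that TLS is terminated at the ingress here, which is why gRPC clients must connect with TLS even if the backend itself is plaintext.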
@markofranjic My main suspicion is with the `Path:` filter in your ingress, which, if it doesn't match, will land in a different part of nginx's config and might result in the type of error you are seeing. Have you tried setting it to plain `/` like in the docs example, to see if that makes any difference?
Also, just to confirm: the client is connecting to grpc.example.com, which somehow points to the exposed ingress, right?
If possible, would you be able to get a short packet capture on the client while interacting with the ingress? Since it is TLS traffic you won't see much, but the main thing to confirm is that the ClientHello packets contain the TLS SNI (server_name) extension with the right hostname as configured in the ingress object.
Just curious: the traffic from a client outside the cluster to the LB + ingress-controller is not HTTP/2, right?
So I tried setting up ingress-nginx with an ingress object like the one mentioned in this issue. I was using this helloworld example; the client in this case doesn't have TLS set up by default, and I got logs pretty similar to the OP's description:
```
10.244.0.18 - - [26/Nov/2021:04:05:18 +0000] "_" "_" "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.000 [] [] - - - - ad71f4df1ca400bedc698c8447df64d4
10.244.0.18 - - [26/Nov/2021:04:05:24 +0000] "_" "_" "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.000 [] [] - - - - 3c2a82b060c98ef08d7870943f6ab763
10.244.0.18 - - [26/Nov/2021:04:05:25 +0000] "_" "_" "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.000 [] [] - - - - 56bb0500786bdfda1d48d1058ad545ef
```
A little debugging in nginx revealed that TLS was the problem. So I added TLS to the grpc client like so:
```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"flag"
	"fmt"
	"io/ioutil"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// addr flag as in the grpc-go helloworld client example
var addr = flag.String("addr", "grpc.example.com:443", "the address to connect to")

// Had to fork this function from grpc/credentials since I needed to set the
// "InsecureSkipVerify" flag for TLS
func NewClientTLSFromFile(certFile, serverNameOverride string) (credentials.TransportCredentials, error) {
	b, err := ioutil.ReadFile(certFile)
	if err != nil {
		return nil, err
	}
	cp := x509.NewCertPool()
	if !cp.AppendCertsFromPEM(b) {
		return nil, fmt.Errorf("credentials: failed to append certificates")
	}
	return credentials.NewTLS(&tls.Config{ServerName: serverNameOverride, RootCAs: cp, InsecureSkipVerify: true}), nil
}

func main() {
	flag.Parse()
	// Set up a connection to the server.
	creds, err := NewClientTLSFromFile("/etc/ssl/certs/ca-certificates.crt", "grpc.example.com")
	if err != nil {
		log.Fatalf("Unable to create TLS creds")
	}
	conn, err := grpc.Dial(*addr, grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	// ... RPC calls as in the helloworld example
}
```
And now the log looks like
```
10.244.0.25 - - [26/Nov/2021:05:11:55 +0000] "POST /helloworld.Greeter/SayHello HTTP/2.0" 503 190 "-" "grpc-go/1.41.0" 93 0.000 [test-ingress-service-port-50051-grpc] [] - - - - c7b7e33cf236c4c21f0b4400d69c0526
10.244.0.25 - - [26/Nov/2021:05:12:10 +0000] "POST /helloworld.Greeter/SayHello HTTP/2.0" 503 190 "-" "grpc-go/1.41.0" 93 0.000 [test-ingress-service-port-50051-grpc] [] - - - - 082366fbd2c4809da7148e5330c90ad3
```
which indicates a proper decode on the ingress. The returned status is still 503 since the plumbing to the upstream server wasn't correct in my case, but that's a different problem to solve.
Hi, if I understand correctly, I need to change the configuration in my C# code?
@markofranjic
> Hi, If I understand I need to change the configuration in my C# code?
Yes, most likely your code right now is not setting up TLS for the gRPC client, so that is what will need to be changed. Ingress->gRPC server will remain plaintext in this case, so that doesn’t need to change.
Hi,
I created a gRPC server with this tutorial: https://grpc.io/docs/languages/csharp/quickstart/
I use an insecure channel, so all traffic is TLS-terminated at NGINX, and it doesn't work. When I tried it over a local IP, everything worked as expected.
I think you need something like https://grpc.io/docs/guides/auth/#with-server-authentication-ssltls-3. I'm not too familiar with C#, so I haven't tried it out to say whether that's all that's needed.
So that's the problem: we have 20 microservices in our Kubernetes cluster, so I would need to update all of them every time a certificate expires. I don't understand why SSL termination on the ingress is not enough.
The certificate you provide to the client is the root CA certificate bundle that your client will trust, and root CAs generally have very long expiration times. Such a bundle is needed by any client initiating a TLS connection to some server, and clients usually use the default bundle configured on the OS — so your services don't need to be touched when the server certificate rotates.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
I have a similar issue where i just see this in the logs:
```
35.153.65.211 - - [17/Aug/2022:16:26:49 +0000] "PRI * HTTP/2.0" 400 150 "-" "-" 0 0.001 [] [] - - - - 1a706808c86a6c9c50e454459411b85
```
Did anyone find a solution?
Same issue also
```
"PRI * HTTP/2.0" 400 150 "-" "-" 0 0.012 [] [] - - - - b766746fc3acd8d47ea100b21732c81b
*191880 upstream rejected request with error 1 while reading response header from upstream, client: 172.16.1.7, server: foo.com, request: "GET / HTTP/2.0", upstream: "grpc://172.16.2.241:19530", host: "foo.com"
```
@akashmantry were you able to find a solution?
Yes @SalahBellagnaoui. It was a silly thing: `use-http2` was set to false in the configmap. Changing the value to true resolved it for me.
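For anyone landing here: `use-http2` is a key in the controller's ConfigMap, not an Ingress annotation. A sketch of the relevant fragment (the ConfigMap name and namespace below assume a default install and may differ in your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # assumed; must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  use-http2: "true"  # defaults to true; gRPC requires HTTP/2 on the TLS listener
```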
@akashmantry I am having a hard time getting this one to work. I thought the nginx ingress supports HTTP/2 by default. I do not know how to use the configmap, as I am new to this. Would you please post your ingress YAML? I would greatly appreciate your help with this.
In my case, exposing the service of my gRPC app as type LoadBalancer solved the problem for me; in the beginning I was trying to expose it through the ingress-nginx-controller LB.
And yes @ravilanda, HTTP/2 is enabled by default.
@akashmantry wow! Thank you so much for your quick response. I was able to achieve that (by exposing the service as a LoadBalancer), but I am trying to use the nginx ingress. Also, what I noticed was that when I exposed it via the load balancer, I did not see the load being balanced properly: the client requests were always going to the same pod! Did you notice this?
@markofranjic this is what worked for me https://github.com/fullstorydev/grpcurl/issues/347#issuecomment-1879965122
NGINX Version:
```
nginx -v
nginx/1.19.9
```
NGINX Installation: https://kubernetes.github.io/ingress-nginx/deploy/#azure
Platform: Azure AKS

### Problem Description
Hi everyone, I tried to configure gRPC for multiple services in one ingress configuration. Locally it works perfectly, but when I expose my services over the ingress, I got the exception.
This is my ingress:
```
kubectl describe ingress ingress-grpc
```
And when I generate traffic with imported protos in BloomRPC, you can see logs from NGINX something like this.
My protos: `p3.protos.appservices.v1.AppService.LocationStream`
Regards
/kind bug