juicemia closed this issue 8 months ago.
Also adding the config I ended up with:
Sidecar:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: dotnet-api
spec:
  egress:
  - hosts:
    - REDACTED
  ingress:
  - defaultEndpoint: 0.0.0.0:8081
    port:
      name: grpc
      number: 8081
      protocol: TLS
    tls:
      caCertificates: /etc/istio/tls-ca-certs/ca.crt
      mode: SIMPLE
      privateKey: /etc/istio/tls-certs/tls.key
      serverCertificate: /etc/istio/tls-certs/tls.crt
  - defaultEndpoint: 0.0.0.0:80
    port:
      name: http
      number: 80
      protocol: HTTP
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  workloadSelector:
    labels:
      app: dotnet-api
```
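One way to confirm that the Sidecar resource above actually produced a TLS listener on 8081 is to dump the sidecar's listener config with istioctl. This is a diagnostic sketch, not from the original report; `<pod>` is a placeholder for a real dotnet-api pod name, and it assumes istioctl is installed and pointed at the right cluster/namespace:

```shell
# Dump the inbound listener on port 8081 for one dotnet-api pod.
# If the Sidecar ingress TLS config took effect, the filter chain should
# show an "envoy.transport_sockets.tls" transport socket referencing
# /etc/istio/tls-certs/tls.crt and tls.key.
istioctl proxy-config listeners <pod> --port 8081 -o json
```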
Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dotnet-api
    version: 0.1.1137
  name: dotnet-api
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: dotnet-api
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        sidecar.istio.io/userVolume: '{"tls-secret":{"secret":{"secretName":"ht-test-dotnet-api-grpc"}}}'
        sidecar.istio.io/userVolumeMount: '{"tls-secret":{"mountPath":"/etc/istio/tls-certs/","readOnly":true}}'
      creationTimestamp: null
      labels:
        app: dotnet-api
        version: 0.1.1137
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: dotnet-api
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 10
      automountServiceAccountToken: true
      containers:
      - image: dotnet-api
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /__health
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 5
        name: dotnet-api
        ports:
        - containerPort: 8081
          name: grpc
          protocol: TCP
        - containerPort: 80
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /__health
            port: 80
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 15
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "3"
            memory: 1500M
          requests:
            cpu: 50m
            memory: 750M
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /elastic-certs
          mountPropagation: None
          name: elastic-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      imagePullSecrets:
      - name: gcr-credentials
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: dotnet-api
      serviceAccountName: dotnet-api
      shareProcessNamespace: false
      terminationGracePeriodSeconds: 30
      volumes:
      - name: elastic-certs
        secret:
          defaultMode: 420
          optional: false
          secretName: app-ssl-es-http-certs-public
```
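A quick check (not from the original report) that the `sidecar.istio.io/userVolume` / `userVolumeMount` annotations actually delivered the cert files into the `istio-proxy` container. Note the Sidecar's `caCertificates` points at `/etc/istio/tls-ca-certs/ca.crt`, which is not covered by the `/etc/istio/tls-certs/` mount above, so that path deserves a look too:

```shell
# List the files the sidecar should be serving TLS from.
# Assumes the deployment is in the current context's namespace.
kubectl exec deploy/dotnet-api -c istio-proxy -- ls -l /etc/istio/tls-certs/

# The Sidecar also references a CA cert at a different path; verify it exists.
kubectl exec deploy/dotnet-api -c istio-proxy -- ls -l /etc/istio/tls-ca-certs/
```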
Service:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"80":{},"8081":{}}}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-aa1c883d-namespace-dotnet-api-80-2855fd52","8081":"k8s1-aa1c883d-namespace-dotnet-api-8081-2082cadf"},"zones":["europe-west3-a","europe-west3-b","europe-west3-c"]}'
  labels:
    app: dotnet-api
    version: 0.1.1137
  name: dotnet-api
spec:
  clusterIP: 10.82.96.194
  clusterIPs:
  - 10.82.96.194
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: HTTP2
    name: grpc-dotnet-api
    port: 8081
    protocol: TCP
    targetPort: 8081
  - name: http-dotnet-api
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dotnet-api
  sessionAffinity: None
  type: ClusterIP
```
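To isolate whether the problem is in the sidecar TLS termination or in the GCP load balancer path, the TLS port can be exercised against a pod directly. This is a diagnostic sketch, not part of the original report; `<pod-ip>` is a placeholder, and it assumes grpcurl is available and the server has gRPC reflection enabled:

```shell
# Find a pod IP for the deployment.
kubectl get pods -l app=dotnet-api -o wide

# Probe the TLS-terminated gRPC port directly. -insecure skips certificate
# verification, which matters here because the serving cert is unlikely to
# be issued for a raw pod IP.
grpcurl -insecure <pod-ip>:8081 list
```

If this succeeds but requests through the load balancer still fail, the sidecar-side TLS setup is working and the investigation shifts to the GCP backend service and its health checks.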
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2023-11-27. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.
Created by the issue and PR lifecycle manager.
Is this the right place to submit this?
Bug Description
I'm trying to forward traffic to my pods from a GCP load balancer. HTTP requests work fine, but gRPC requests are failing.
GCP requires that gRPC backends accept TLS connections, so I'm trying to set up TLS termination on the sidecars using this guide.
When I make a gRPC request, I get the following:
I see the following log in the `istio-proxy` container:

The sidecars I'm working with are on the latest version of Istio:
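As a side note (not from the original report), the claim that the sidecars are on the latest version can be double-checked with istioctl, assuming it is on PATH and targeting the right cluster:

```shell
# Control-plane and data-plane versions in one view.
istioctl version

# Per-pod proxy version and xDS sync state; useful for spotting stale sidecars.
istioctl proxy-status
```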
At this point I don't know what else I can configure, and I don't see any logs indicating that anything is particularly wrong.
Version
Additional Information
No response