Shige99011 opened this issue 2 years ago
Thank you for this question, @Shige99011. Your Consul configuration looks right. I'm not certain how your application itself behaves, but could there be an issue with the mismatch between the containerPort: 65005 and the SERVICE_PORT of 65001? I'm just seeing that in your client Service and Deployment.
Hey @Shige99011
If you're using explicit upstreams, i.e. providing the connect-service-upstreams annotation, then you'll need to address your upstream service via localhost (set SERVER_ADDRESS to localhost).
Since you're enabling tproxy, I'd recommend omitting this annotation and seeing if that fixes your issue. If you omit it, you can use kube DNS names directly.
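For illustration, here is a rough sketch of the two addressing modes on the client Deployment's pod template, using the names and ports from this thread (trimmed fragments and assumed values, not a verified fix):

# Option A: explicit upstream, transparent proxy disabled - dial the sidecar on localhost
metadata:
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
    'consul.hashicorp.com/transparent-proxy': 'false'
    'consul.hashicorp.com/connect-service-upstreams': 'grpcserver:2234'
spec:
  containers:
    - name: client
      env:
        - name: 'SERVER_ADDRESS'
          value: 'localhost'   # the sidecar exposes the upstream on localhost:2234
        - name: 'SERVICE_PORT'
          value: '2234'
---
# Option B: transparent proxy enabled, no upstreams annotation - dial the kube DNS name
metadata:
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
spec:
  containers:
    - name: client
      env:
        - name: 'SERVER_ADDRESS'
          value: 'grpcserver'  # kube DNS name; traffic is redirected through the mesh
        - name: 'SERVICE_PORT'
          value: '80'          # the Kubernetes Service port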
Thanks for your replies. I could confirm that gRPC communication between the services works now. Yes, the upstream annotation isn't needed since transparent proxy is enabled, but it seems to work even when it is present.
By the way, is it possible to configure the Envoy proxy with multiple ports? I would like a service to expose two kinds of ports (e.g. one for gRPC and another for HTTP). The service exposes a REST API outside of the service mesh and communicates with the other services inside the mesh over gRPC. According to the docs, it seems only one protocol can be configured per service via ServiceDefaults, though.
Hey @Shige99011
For multi-port support, we currently have a workaround documented here: https://www.consul.io/docs/k8s/connect#kubernetes-pods-with-multiple-ports
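In case it helps, the gist of that workaround as I read the doc (this just restates the documented pattern with the service names from this thread): register the pod as two Consul services by listing both names and ports in the connect-service annotations, keep transparent proxy disabled, and point the downstream at each service via explicit upstreams:

# On the multi-port pod (plus one ServiceAccount and one Kubernetes Service per Consul service)
'consul.hashicorp.com/connect-inject': 'true'
'consul.hashicorp.com/transparent-proxy': 'false'
'consul.hashicorp.com/connect-service': 'grpcserver,grpcserver2'
'consul.hashicorp.com/connect-service-port': '65001,65000'

# On the downstream (client) pod, each upstream gets its own local port
'consul.hashicorp.com/connect-service-upstreams': 'grpcserver:2234,grpcserver2:1234'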
Thanks for the info. I tried it, but it doesn't work yet.
The error says:
For the HTTP (REST) connection:
System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:1234)
For the gRPC connection:
Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="Error starting gRPC call. HttpRequestException: Connection refused (127.0.0.1:2234) SocketException: Connection refused", DebugException="System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:2234)
My attempt is below. Is anything wrong or missing? Thanks for your help.
ServiceDefaults configuration:
# apiVersion: consul.hashicorp.com/v1alpha1
# kind: ServiceDefaults
# metadata:
#   name: client
# spec:
#   protocol: tcp
# ---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: grpcserver
spec:
  protocol: grpc
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: grpcserver2
spec:
  protocol: http
Deployment configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grpcserver
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grpcserver2
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: client
---
apiVersion: v1
kind: Service
metadata:
  name: grpcserver
spec:
  selector:
    app: grpcserver
  ports:
    - port: 80
      targetPort: 65001
      name: grpc
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: grpcserver2
spec:
  selector:
    app: grpcserver
  ports:
    - port: 80
      targetPort: 65000
      name: http
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grpcserver
  name: grpcserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpcserver
  template:
    metadata:
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/transparent-proxy': 'false'
        'consul.hashicorp.com/connect-service': 'grpcserver,grpcserver2'
        'consul.hashicorp.com/connect-service-port': '65001,65000'
      labels:
        app: grpcserver
    spec:
      containers:
        - name: grpcserver
          image: grpcserver:latest
          imagePullPolicy: Never # Need to pull from local registry?
          ports:
            - containerPort: 65001
              name: grpc
          env:
            - name: 'GRPC_PORT'
              value: "65001"
        - name: grpcserver2
          image: grpcserver:latest
          imagePullPolicy: Never # Need to pull from local registry?
          ports:
            - containerPort: 65000
              name: http
          env:
            - name: 'HTTP_PORT'
              value: "65000"
      serviceAccountName: grpcserver
---
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  selector:
    app: client
  ports:
    - port: 80
      targetPort: 65005
      name: grpc
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: client
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/transparent-proxy': 'false'
        'consul.hashicorp.com/connect-service-upstreams': "grpcserver:2234,grpcserver2:1234"
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: client:latest
          imagePullPolicy: Never # Need to pull from local registry?
          ports:
            - containerPort: 65005
          env:
            - name: 'SERVER_ADDRESS'
              value: '127.0.0.1'
            - name: 'SERVICE_PORT'
              value: '2234'
            - name: 'SERVICE_PORT_REST'
              value: '1234'
The Consul configuration (Helm values):
# Choose an optional name for the datacenter
global:
  name: consul
client:
  enabled: true
  grpc: true
  exposeGossipPorts: true
# Enable the Consul Web UI via a NodePort
ui:
  enabled: true
  service:
    enabled: true
    type: 'NodePort'
# Enable Connect for secure communication between nodes
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: false
connect:
  enabled: true
# Enable CRD Controller
controller:
  enabled: true
# Automatically registers services in kubernetes to consul
syncCatalog:
  enabled: true
  toConsul: true
  toK8S: true
# Use only one Consul server for local development
server:
  enabled: true
  replicas: 1
  bootstrapExpect: 1
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
Hi, I would like to configure Consul Connect to use the gRPC protocol between services within the service mesh, but sending a request from one service to another seems to fail. How should this be configured?
Here is what I'm doing:
There are two services that need to communicate over gRPC. Service grpcserver: listens on port 65001 using the HTTP/2 protocol. Service client: opens a gRPC channel to http://grpcserver:65001 and then communicates with grpcserver. These work in a Docker Compose environment, but not on Kubernetes with Consul.
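For context, a minimal Docker Compose sketch of the non-mesh setup described above (the image names and environment variables are assumptions carried over from the Kubernetes manifests in this thread):

services:
  grpcserver:
    image: grpcserver:latest
    environment:
      GRPC_PORT: '65001'           # listens on 65001 over HTTP/2
  client:
    image: client:latest
    depends_on:
      - grpcserver
    environment:
      SERVER_ADDRESS: 'grpcserver' # Compose DNS resolves the service name directly
      SERVICE_PORT: '65001'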
Here is my configuration for consul:
And the services are deployed with:
For each service deployment:
This is the log from the client service:
What is wrong or missing?
My Consul environment is:
helm: 0.41.1
consul: 1.11.3
Kubernetes: k3s version v1.21.3+k3s1 (1d1f220f)
Thanks for your help.