**liuxuzxx** opened 1 week ago (status: Open)
For example, we have a my-grpc-client, which is the gRPC client, and a my-grpc-server, which is the gRPC server. The my-grpc-client configuration is as follows:
```yaml
grpc:
  client:
    my-service:
      address: dns:///my-grpc-server-headless.cpaas-dev:9013
      negotiation-type: PLAINTEXT
      enable-keep-alive: true
      keep-alive-time: 30s
      keep-alive-timeout: 5s
```
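(Not part of the original configs, but worth noting: with a headless service the channel normally also needs its load-balancing policy set to `round_robin`, because grpc-java's default `pick_first` sends all traffic to a single resolved address. If that is not already set elsewhere, a minimal sketch using the starter's `GrpcChannelConfigurer`; the bean name is of my choosing:)

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import net.devh.boot.grpc.client.channelfactory.GrpcChannelConfigurer;

@Configuration
public class GrpcClientConfig {

    // Applies to every channel the starter creates; the second argument
    // is the client name from the grpc.client.* configuration.
    @Bean
    public GrpcChannelConfigurer loadBalancingConfigurer() {
        return (channelBuilder, clientName) ->
                channelBuilder.defaultLoadBalancingPolicy("round_robin");
    }
}
```

Newer starter versions may also expose this directly as a `grpc.client.<name>.default-load-balancing-policy` property.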
The my-grpc-server configuration is as follows:

```yaml
grpc:
  server:
    port: 9013
```
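(A side note that is an assumption of mine, not something raised in the thread: the client above pings every 30s, while grpc-java servers by default only permit keepalive pings every 5 minutes and answer faster pings with a GOAWAY (`too_many_pings`). If you rely on such aggressive keepalives, the server side can be relaxed via the starter's `GrpcServerConfigurer`; a sketch, with the bean name illustrative, and depending on your dependencies the builder may be the shaded `io.grpc.netty.shaded.io.grpc.netty.NettyServerBuilder` instead:)

```java
import java.util.concurrent.TimeUnit;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import io.grpc.netty.NettyServerBuilder;
import net.devh.boot.grpc.server.serverfactory.GrpcServerConfigurer;

@Configuration
public class GrpcServerConfig {

    // Allow clients to send keepalive pings as often as every 30 seconds.
    @Bean
    public GrpcServerConfigurer keepAliveConfigurer() {
        return serverBuilder -> {
            if (serverBuilder instanceof NettyServerBuilder) {
                ((NettyServerBuilder) serverBuilder)
                        .permitKeepAliveTime(30, TimeUnit.SECONDS)
                        .permitKeepAliveWithoutCalls(true);
            }
        };
    }
}
```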
The my-grpc-server's k8s headless Service is:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: my-grpc
    meta.helm.sh/release-namespace: cpaas-dev
  creationTimestamp: "2024-11-11T05:57:01Z"
  labels:
    app: my-grpc-server
    app.kubernetes.io/managed-by: Helm
    env: dev
  name: my-grpc-server-headless
  namespace: cpaas-dev
  resourceVersion: "534558927"
  uid: 987cbfc3-30bf-4195-b931-03ab06dd7958
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 9012
    protocol: TCP
    targetPort: 9012
  - name: monitoring
    port: 8090
    protocol: TCP
    targetPort: 8090
  - name: grpclb
    port: 9013
    protocol: TCP
    targetPort: 9013
  selector:
    app: my-grpc-server
    env: dev
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
After my-grpc-client has been sending requests to my-grpc-server for a while, the number of my-grpc-server Pods is scaled up from 2 to 3. We find that the newly added my-grpc-server Pod never receives any requests from my-grpc-client; no connection to the new Pod is established unless my-grpc-client is restarted.

Does grpc-client-spring-boot-starter have the ability to sense changes in the number of k8s Pods?

@ST-DDT Can you help me?

There is another question: why does each Pod of my-grpc-client and my-grpc-server establish only one TCP connection? Is there a parameter that can configure this? I checked GrpcChannelProperties and did not see any relevant configuration options.
**ST-DDT** replied:

> Does grpc-client-spring-boot-starter have the ability to sense changes in the number of k8s Pods?

Only if you watch the k8s API. Or maybe if the connection terminates? 🤔

> why does each Pod of my-grpc-client and my-grpc-server establish only one TCP connection?

I think because gRPC tries to limit the number of connections, but it's basically a gRPC internal that I cannot affect. Maybe ask this question upstream at grpc-java for a better answer or even an actual solution?!
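(For readers who want more than one connection per backend, a workaround sketch of my own, not something the starter or this thread prescribes: a single gRPC channel keeps one subchannel, i.e. one TCP connection, per resolved address, so the usual way to get parallel connections is to create several channels yourself and spread calls across them. The `ChannelPool` class below is hypothetical:)

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ChannelPool {

    private final List<ManagedChannel> channels;

    // N independent channels, each maintaining its own connections.
    public ChannelPool(String target, int size) {
        this.channels = IntStream.range(0, size)
                .mapToObj(i -> ManagedChannelBuilder.forTarget(target)
                        .defaultLoadBalancingPolicy("round_robin")
                        .usePlaintext()
                        .build())
                .collect(Collectors.toList());
    }

    // Pick a channel at random; stubs created from it use that channel's connections.
    public ManagedChannel next() {
        return channels.get(ThreadLocalRandom.current().nextInt(channels.size()));
    }

    public void shutdown() {
        channels.forEach(ManagedChannel::shutdown);
    }
}
```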
**liuxuzxx** replied:

> Only if you watch the k8s API. Or maybe if the connection terminates? 🤔

Suppose I detect Pods scaling up and down through the k8s API. How can I pass the IPs of these discovered Pods to the grpc-client-spring-boot-starter framework, so that the framework rebalances the gRPC load across the Pods?

> I think because gRPC tries to limit the number of connections, but it's basically a gRPC internal that I cannot affect. Maybe ask this question upstream at grpc-java for a better answer or even an actual solution?!

Ok, thank you very much!
**ST-DDT** replied:

> Suppose I detect Pods scaling up and down through the k8s API. How can I pass the IPs of these discovered Pods to the grpc-client-spring-boot-starter framework, so that the framework rebalances the gRPC load across the Pods?

You have to implement a NameResolver similar to the DiscoveryClientNameResolver.

It would be cool if you could contribute it to the project once you have implemented it.
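(A rough illustration of what such a resolver could look like. This is a sketch only: `KubernetesNameResolver`, `onPodsChanged`, and the stubbed-out k8s watch are all hypothetical, modeled loosely on how DiscoveryClientNameResolver feeds addresses into a channel:)

```java
import java.net.InetSocketAddress;
import java.util.List;
import java.util.stream.Collectors;

import io.grpc.EquivalentAddressGroup;
import io.grpc.NameResolver;
import io.grpc.Status;

// Hypothetical resolver that feeds pod IPs (e.g. observed via the k8s API)
// into a gRPC channel.
public class KubernetesNameResolver extends NameResolver {

    private final String serviceName;
    private final int port;
    private Listener2 listener;

    public KubernetesNameResolver(String serviceName, int port) {
        this.serviceName = serviceName;
        this.port = port;
    }

    @Override
    public String getServiceAuthority() {
        return serviceName;
    }

    @Override
    public void start(Listener2 listener) {
        this.listener = listener;
        refresh();
        // In a real implementation you would start a k8s Endpoints/EndpointSlice
        // watch here and call onPodsChanged(...) whenever the pod set changes.
    }

    // Call this with the current pod IPs whenever the watch reports a change;
    // the channel then rebalances over the new address list.
    public void onPodsChanged(List<String> podIps) {
        List<EquivalentAddressGroup> addresses = podIps.stream()
                .map(ip -> new EquivalentAddressGroup(new InetSocketAddress(ip, port)))
                .collect(Collectors.toList());
        if (addresses.isEmpty()) {
            listener.onError(Status.UNAVAILABLE.withDescription("no pods for " + serviceName));
        } else {
            listener.onResult(ResolutionResult.newBuilder().setAddresses(addresses).build());
        }
    }

    @Override
    public void refresh() {
        // Re-query the k8s API and call onPodsChanged(...) with the result.
    }

    @Override
    public void shutdown() {
        // Stop the k8s watch.
    }
}
```

To plug it in, you would also register a `NameResolverProvider` for a custom scheme (for example `kubernetes:///my-grpc-server`) via `NameResolverRegistry.getDefaultRegistry().register(...)`, so the `grpc.client.*.address` can point at it.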
**liuxuzxx** replied:

> You have to implement a NameResolver similar to the DiscoveryClientNameResolver.

Thank you, I will explore this class.
**The context**

What do you wish to achieve?

**The question**

Our applications all run on k8s, and services communicate through a k8s Service or headless Service. When we use the gRPC protocol, we find that a headless Service gives us gRPC load balancing at startup, and it works fine afterwards. However, when the number of Pods on the gRPC server changes (scale-up), the gRPC client Pods do not establish gRPC connections to the new Pod, so the new Pod cannot take any load.
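(As a possible mitigation, an assumption on my part rather than advice from this thread: grpc-java's DNS resolver only re-resolves when a connection is lost or the channel requests it, so capping connection lifetime on the server forces clients to reconnect, re-resolve DNS, and thereby discover newly scaled Pods. A sketch using `NettyServerBuilder.maxConnectionAge`; the configurer bean is illustrative, and the builder may be the shaded variant in your setup:)

```java
import java.util.concurrent.TimeUnit;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import io.grpc.netty.NettyServerBuilder;
import net.devh.boot.grpc.server.serverfactory.GrpcServerConfigurer;

@Configuration
public class ConnectionRotationConfig {

    // Close server-side connections after ~5 minutes (grpc-java adds jitter),
    // so clients reconnect, re-resolve DNS, and pick up new pods without a restart.
    @Bean
    public GrpcServerConfigurer maxConnectionAgeConfigurer() {
        return serverBuilder -> {
            if (serverBuilder instanceof NettyServerBuilder) {
                ((NettyServerBuilder) serverBuilder)
                        .maxConnectionAge(5, TimeUnit.MINUTES)
                        .maxConnectionAgeGrace(30, TimeUnit.SECONDS);
            }
        };
    }
}
```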
**Which versions do you use?**