Open qixiaoyang0 opened 1 month ago
Hi @qixiaoyang0 - Thanks for highlighting this. Adding some context: this performance impact was mentioned as part of the PR that introduced the line you mention. Refer to https://github.com/etcd-io/etcd/pull/16822#issuecomment-1850490064.
The PR was required to fix https://github.com/etcd-io/etcd/issues/15247.
I am not sure there is much we can do to alleviate this currently; I defer to @ahrtr and @serathius for any ideas on an alternative approach (if any).
Send many lease renew requests to the etcd cluster; on v3.5.11 the CPU usage will increase a lot.
- How did you send the "lease update request"? Did you keep sending KeepAliveOnce?
- If yes (to the above question), do you have a real use case that requires repeatedly sending KeepAliveOnce? Have you considered calling KeepAlive?
@ahrtr Thanks for the reply. I'm very sorry that my expression was unclear. "lease update request" means lease renew. Our method of renewing leases is neither KeepAliveOnce nor KeepAlive. Instead, we use a keepalive implementation developed on top of the gRPC stream. From the etcd server's point of view, this is equivalent to KeepAlive.
This problem is because of this modification
What was your method of deduction? Did you do any profiling?
Instead, we use a keepalive implementation developed on top of the gRPC stream.
Thanks for the response, but I do not quite understand this. Please feel free to ping me on slack (K8s workspace).
This problem is because of this modification
What was your method of deduction? Did you do any profiling?
My test method is:
- The CPU growth was not discovered using standard test cases; it was discovered in our production environment. In our cluster, we observed through log statistics that the lease renew rate is 1,500 times per second. I think it can be reproduced using benchmark tests (a reproduction sketch follows this list).
- Checking the CPU usage through pprof shows that the main growth hot spots are rafthttp.(streamReader).run and rafthttp.(streamWriter).run.
- In metrics, the growth rates of etcd_network_peer_received_bytes_total and etcd_network_peer_sent_bytes_total are an order of magnitude higher than in versions before v3.5.13.
- In etcdserver.(*EtcdServer).LeaseRenew, I deleted ensureLeadership, recompiled the binary, and deployed it to the cluster. The CPU usage dropped back to the value before v3.5.13.
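For readers who want to try reproducing this, below is a minimal load-generator sketch using the clientv3 API. It is not the reporter's actual client (they use a custom gRPC-stream keepalive); KeepAliveOnce is used here only as a simple stand-in for one renewal request per lease, and the endpoint, lease count, and TTL are illustrative assumptions.

```go
// leaseload: sketch of a workload of ~1,500 lease renewals per second.
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Grant a number of short-TTL leases, mirroring many services with TTL 3-4s.
	const leases = 1500
	ids := make([]clientv3.LeaseID, 0, leases)
	for i := 0; i < leases; i++ {
		resp, err := cli.Grant(ctx, 4)
		if err != nil {
			log.Fatal(err)
		}
		ids = append(ids, resp.ID)
	}

	// Renew every lease roughly once per second (~1,500 renewals/s in total).
	// Each KeepAliveOnce call issues a single LeaseKeepAliveRequest.
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for range ticker.C {
		for _, id := range ids {
			go func(id clientv3.LeaseID) {
				if _, err := cli.KeepAliveOnce(ctx, id); err != nil {
					log.Printf("keepalive %x: %v", id, err)
				}
			}(id)
		}
	}
}
```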
In etcdserver.(*EtcdServer).LeaseRenew, I deleted ensureLeadership
makes me wonder why it increases the CPU usage so much; it merely waits on the read state notification.
edit: etcd_network_peer_sent_bytes_total are an order of magnitude higher
proto parsing? :)
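To make the "waits on read state notification" point concrete, here is a simplified, hypothetical sketch of that pattern. It is not the actual etcd ensureLeadership code; the readIndexer interface and fakeIndexer type are invented for illustration. The key point is that each renewal now waits for a quorum confirmation, which plausibly accounts for the extra peer traffic.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// readIndexer abstracts the raft interaction: Confirm asks a (hypothetical)
// raft layer to confirm leadership via a quorum round trip and returns a
// channel that is closed once the corresponding read state arrives.
type readIndexer interface {
	Confirm(ctx context.Context) (<-chan struct{}, error)
}

// ensureLeadershipSketch blocks until the read state notification arrives or
// the context expires. The cost is one quorum round trip over the peer
// network per call, i.e. per lease renewal.
func ensureLeadershipSketch(ctx context.Context, ri readIndexer) error {
	notify, err := ri.Confirm(ctx)
	if err != nil {
		return err
	}
	select {
	case <-notify:
		return nil
	case <-ctx.Done():
		return errors.New("timed out waiting for read state notification")
	}
}

// fakeIndexer simulates the quorum confirmation arriving after a short delay.
type fakeIndexer struct{}

func (fakeIndexer) Confirm(ctx context.Context) (<-chan struct{}, error) {
	ch := make(chan struct{})
	go func() { time.Sleep(5 * time.Millisecond); close(ch) }()
	return ch, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(ensureLeadershipSketch(ctx, fakeIndexer{})) // prints <nil>
}
```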
In a cluster where the lease renew request rate is stable, I first collect metrics once and record the values of etcd_network_peer_sent_bytes_total and etcd_network_peer_received_bytes_total. After 10 minutes, I collect metrics again and record the values. In this way, I can derive the sending rate and receiving rate.
Comparing the version before v3.5.13 with v3.5.13, the sent and received rates change from 9,523/s to 43,034/s.
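As a concrete illustration of this measurement, here is a tiny sketch of the rate calculation: sample each counter twice, 10 minutes apart, and divide the delta by the elapsed seconds. The counter values are placeholders; the deltas are chosen only so the output matches the 43,034/s figure reported above.

```go
package main

import (
	"fmt"
	"time"
)

// ratePerSecond turns two samples of a monotonically increasing counter into
// an average rate over the sampling window.
func ratePerSecond(first, second float64, window time.Duration) float64 {
	return (second - first) / window.Seconds()
}

func main() {
	const window = 10 * time.Minute

	// Hypothetical counter samples taken 10 minutes apart.
	sentFirst, sentSecond := 1.0e9, 1.0e9+25_820_400 // etcd_network_peer_sent_bytes_total
	recvFirst, recvSecond := 2.0e9, 2.0e9+25_820_400 // etcd_network_peer_received_bytes_total

	fmt.Printf("sent: %.0f/s\n", ratePerSecond(sentFirst, sentSecond, window))
	fmt.Printf("received: %.0f/s\n", ratePerSecond(recvFirst, recvSecond, window))
}
```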
Usually users don't need to renew a lease too frequently. KeepAlive sends a LeaseKeepAliveRequest every TTL/3 seconds; can you follow a similar pattern? If not, can you please explain why?
In order to avoid long back-and-forth communication, please feel free to ping me this week.
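For reference, a minimal sketch of the standard clientv3 KeepAlive pattern mentioned above: grant a lease once and let the client library send LeaseKeepAliveRequests on a single stream, instead of issuing renewals yourself. The endpoint and the 10-second TTL are illustrative assumptions.

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	lease, err := cli.Grant(ctx, 10) // 10s TTL; the client renews well before expiry
	if err != nil {
		log.Fatal(err)
	}

	ch, err := cli.KeepAlive(ctx, lease.ID)
	if err != nil {
		log.Fatal(err)
	}

	// Drain the responses; a closed channel means the lease expired or the
	// keepalive stream was interrupted.
	for resp := range ch {
		log.Printf("lease %x renewed, TTL=%d", resp.ID, resp.TTL)
	}
}
```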
Yes, the client does not need to send renew requests to the server too frequently, but there are more than 1,000 services in our system, and most of the lease TTLs are set to 3 or 4 seconds, so there are about 1,500 renew requests per second on the server.
Thank you for your attention, but I can't find your email address or any other way to message you. How can I contact you?
@ahrtr I'd like to help in figuring out this issue, let me know if and how I can help. Thanks.
CC: @qixiaoyang0
Thanks @vivekpatani. It'd be better to get the following clarified before we close the ticket.
- The reduction (around 20%) in lease renew QPS is expected, as mentioned in the comment, but the impact on CPU usage wasn't evaluated in the first place. It seems a bit counterintuitive, because #16822 just added some network I/O.
- We mainly need to understand the use case.
As mentioned in the comment, the change in #16822 has no impact on K8s, and the performance of lease renew isn't that critical, so most likely we won't change anything for this ticket. But we are still happy to get the above items clarified before we close it.
ack, will post results in st. @ahrtr
Bug report criteria
What happened?
In the etcd cluster I tested (arm64 CPUs), the lease renew rate is 1,500/s, and the CPU usage of the leader node increased by 20%.
This problem is because of this modification https://github.com/etcd-io/etcd/blob/bb701b9265f31d61db5906325e0a7e2abf7d3627/server/etcdserver/v3_server.go#L288
We conducted rigorous tests, including checking pprof, collecting metrics, and deleting that line of code and recompiling etcd, and we confirmed that this was the reason for the CPU increase.
What did you expect to happen?
It may not be necessary to add leader confirmation to lease renew, or a more performant method could be used.
How can we reproduce it (as minimally and precisely as possible)?
Send many lease renew requests to the etcd cluster; on v3.5.11 the CPU usage will increase a lot.
Anything else we need to know?
No response
Etcd version (please run commands below)
v3.5.11
Etcd configuration (command line flags or environment variables)
Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)
Relevant log output
No response