chaochn47 opened this issue 2 years ago (status: Open)
From reading the code, it seems the etcd server was stuck in
It was stuck there because the v2 health check handler returned etcdserver: server stopped here, and the reason for that was close(s.stopping).
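To make that concrete, here is a minimal sketch of the pattern, assuming simplified type and channel names rather than etcd's actual handler code: once the stopping channel is closed, request paths such as the health check fail fast with the etcdserver: server stopped error.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// errStopped mirrors the "etcdserver: server stopped" error seen in the logs.
var errStopped = errors.New("etcdserver: server stopped")

// server and its stopping channel are simplified stand-ins for illustration.
type server struct {
	stopping chan struct{}
}

// checkHealth fails fast once close(s.stopping) has happened, which is why
// the health endpoint starts reporting "server stopped" during shutdown.
func (s *server) checkHealth(ctx context.Context) error {
	select {
	case <-s.stopping:
		return errStopped
	default:
	}
	// ... perform the real health probe here ...
	return nil
}

func main() {
	s := &server{stopping: make(chan struct{})}
	fmt.Println(s.checkHealth(context.Background())) // <nil>
	close(s.stopping)
	fmt.Println(s.checkHealth(context.Background())) // etcdserver: server stopped
}
```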
It looks like fsync was stuck in the middle. Looking into it, 2022-07-10T19:22:09.843Z minus 1h48m54.55408532s is around 2022-07-10T17:33:15.289Z.
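For the record, that timestamp arithmetic can be checked with a few lines of Go (a standalone snippet, unrelated to etcd's code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	end, _ := time.Parse(time.RFC3339Nano, "2022-07-10T19:22:09.843Z")
	stuck, _ := time.ParseDuration("1h48m54.55408532s")
	// Subtracting the reported duration from the log timestamp gives the
	// approximate moment the fsync got stuck.
	fmt.Println(end.Add(-stuck)) // 2022-07-10 17:33:15.28891468 +0000 UTC
}
```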
I expect that, despite disk failures/latency, close(s.done) will be called and the process will exit.
Would much appreciate any insights, thanks!!
Okay, I got a repro working by injecting a sleep at the raftAfterSave failpoint followed by a member removal.
The FIFO scheduler's Stop is the culprit; it is getting stuck: https://github.com/etcd-io/etcd/blob/72d3e382e73c1a2a4781f884fb64792af3242f22/pkg/schedule/schedule.go#L119-L125
Will dig in a little deeper to understand why...
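For reference, this is roughly the shape of the linked Stop path, as a simplified sketch rather than a verbatim copy of pkg/schedule: Stop cancels the run loop's context and then blocks on a done channel that the worker goroutine only closes after its current job returns, so a job stuck on a hung disk write blocks Stop forever.

```go
package schedule

import "context"

// fifo is a simplified stand-in for the scheduler type; field and method
// names here are illustrative, not etcd's exact code.
type fifo struct {
	cancel context.CancelFunc
	donec  chan struct{} // closed by run() when it exits
}

func (f *fifo) run(ctx context.Context, jobs <-chan func(context.Context)) {
	defer close(f.donec)
	for {
		select {
		case job := <-jobs:
			job(ctx) // a stuck apply blocks here indefinitely
		case <-ctx.Done():
			return
		}
	}
}

func (f *fifo) Stop() {
	f.cancel()
	<-f.donec // waits for run() to exit; hangs while job(ctx) is stuck
}
```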
The raft loop was stuck in the middle, and the apply routine was waiting for it.
I think the proper fix should be to cancel this apply in the shutdown scenario even if the disk write is stuck. Right now, the context is ignored.
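As a rough illustration of what I mean, a minimal sketch (hypothetical helper and channel names, not a concrete patch): the shutdown path would select on the context as well instead of blocking unconditionally until the apply finishes.

```go
package etcdserverutil // hypothetical package for illustration

import "context"

// waitApplyDone is a hypothetical helper: instead of blocking forever on the
// apply result, it also honors the shutdown context so a stuck disk write
// cannot wedge the stop sequence.
func waitApplyDone(ctx context.Context, applied <-chan struct{}) error {
	select {
	case <-applied:
		// Apply finished normally.
		return nil
	case <-ctx.Done():
		// Shutdown requested while the apply is still stuck (for example,
		// fsync never returns); give up waiting so teardown can proceed.
		return ctx.Err()
	}
}
```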
Any thoughts? @ahrtr @serathius @spzala
Just a quick question before I do a deep dive: can you reproduce this on release-3.5 or main?
I think I can, but I haven't tried it yet. Will report in a few minutes on release-3.5 or main with the reproduction script shared.
Here is one for v3.4.18: reproduce.txt
Here is one for v3.5.4: reproduce-3.5.txt
A side effect I observed: 2 leaders reported at the same time, but actually there is only one leader!
$ etcdctl endpoint status --cluster -w table
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | DB SIZE IN USE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
| http://127.0.0.1:2379 | 8211f1d0f64f3269 | 3.5.4 | | 20 kB | 16 kB | false | false | 2 | 8 | 8 | |
| http://127.0.0.1:22379 | 91bc3c398fb3c146 | 3.5.4 | | 20 kB | 16 kB | true | false | 2 | 8 | 8 | |
| http://127.0.0.1:32379 | fd422379fda50e48 | 3.5.4 | | 20 kB | 16 kB | false | false | 2 | 8 | 8 | |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
$ curl http://127.0.0.1:1234/etcdserver/raftAfterSave -XPUT -d'sleep(600000)'
$ etcdctl --endpoints http://127.0.0.1:22379,http://127.0.0.1:32379 member remove 8211f1d0f64f3269
Member 8211f1d0f64f3269 removed from cluster ef37ad9dc622a7c4
$ etcdctl endpoint status -w table --endpoints http://127.0.0.1:2379,http://127.0.0.1:22379,http://127.0.0.1:32379
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | STORAGE VERSION | DB SIZE | DB SIZE IN USE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
| http://127.0.0.1:2379 | 8211f1d0f64f3269 | 3.5.4 | | 20 kB | 16 kB | true | false | 2 | 8 | 8 | |
| http://127.0.0.1:22379 | 91bc3c398fb3c146 | 3.5.4 | | 20 kB | 16 kB | false | false | 3 | 10 | 10 | |
| http://127.0.0.1:32379 | fd422379fda50e48 | 3.5.4 | | 20 kB | 16 kB | true | false | 3 | 10 | 10 | |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
To elaborate, the symptom we got was that a stale watch connection was not cleaned up (as it is supposed to be) on member removal, so the client cache was always outdated...
FAILPOINTS=1 ./build needs to be updated to FAILPOINTS=1 ./build.sh in release-3.5.
It might not be safe to forcibly terminate the applying workflow.
The most feasible solution for now is to print a log repeatedly at server.go#L930 and server.go#L978 so as to provide more visibility into the issue.
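Something along these lines, as a minimal sketch of the suggested mitigation (helper name and interval are assumptions; an actual change would live around the lines referenced above): while waiting for the stuck step, emit a warning on a ticker so the hang is visible in the logs.

```go
package etcdserverutil // hypothetical package for illustration

import (
	"time"

	"go.uber.org/zap"
)

// waitWithProgressLog blocks until done is closed, logging a warning every
// 30 seconds so operators can see that shutdown is stuck on this step.
func waitWithProgressLog(lg *zap.Logger, done <-chan struct{}, step string) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			lg.Warn("still waiting during server shutdown", zap.String("step", step))
		}
	}
}
```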
Please note that etcdctl endpoint status isn't an atomic operation; it gets each member's status one by one. If there is a leader change in between, then you may see two leaders in the end. So it could be expected behavior. But you should only be able to see this with very low probability, and it should be a temporary "issue".
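For illustration, a rough sketch of why this can happen (not etcdctl's actual implementation; the endpoint list is made up): each endpoint is queried with its own Status RPC, issued sequentially, so a leader change between two calls can leave two rows both claiming to be the leader.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	endpoints := []string{"http://127.0.0.1:2379", "http://127.0.0.1:22379", "http://127.0.0.1:32379"}
	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// One RPC per endpoint, issued one at a time: the snapshot is not atomic.
	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		resp, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			fmt.Println(ep, "error:", err)
			continue
		}
		isLeader := resp.Leader == resp.Header.MemberId
		fmt.Printf("%s leader=%v term=%d\n", ep, isLeader, resp.RaftTerm)
	}
}
```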
> But you should only be able to see this with very low probability, and it should be a temporary "issue"
Yeah, it only happens when disk IO is stuck in the middle. Usually it is caused by a data center outage.
FYI, we are deploying a fix to the local monitoring agent to forcibly stop the server given that it has already been removed from the membership.
However, this could have been done in etcd, IMHO.
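Roughly, the agent-side check looks like the sketch below (names and signal handling are assumptions, not our actual agent code): ask the remaining members for the member list, and if the local member ID is no longer present, force the stuck local etcd process to exit.

```go
package agent

import (
	"context"
	"syscall"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// killIfRemoved is a hypothetical helper: it returns nil if the local member
// is still part of the cluster, and otherwise sends SIGKILL to the stuck
// etcd process so clients stop talking to a zombie endpoint.
func killIfRemoved(ctx context.Context, cli *clientv3.Client, localID uint64, etcdPid int) error {
	resp, err := cli.MemberList(ctx)
	if err != nil {
		return err
	}
	for _, m := range resp.Members {
		if m.ID == localID {
			return nil // still a member, nothing to do
		}
	}
	// Removed from the membership but the process is stuck in "stopping".
	return syscall.Kill(etcdPid, syscall.SIGKILL)
}
```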
This looks more like a feature request to force etcd to shut down after being removed from the cluster. My first thought is that this should be part of an admin operation to kill etcd on disk failure.
@chaochn47 Has this issue been resolved? I also encountered the problem of abnormal fluctuations in etcd, which is similar to your situation.
What happened?
etcd failed to stop and was stuck in the stopping state after it was removed from the membership. It became unresponsive to any requests sent to it.
What did you expect to happen?
I expect etcd to gracefully terminate itself.
How can we reproduce it (as minimally and precisely as possible)?
It was observed during an availability zone outage. The reproduction can be like the following.
Here is a similar reproduction, https://github.com/etcd-io/etcd/issues/13527, but it does not include the member removal fault injection.
Anything else we need to know?
Many more "apply request took too long" warnings with "error":"context canceled" continued for almost 2 hours.
rafthttp pipelines termination
...
etcd server stopped with exit code 0
Etcd version (please run commands below)
Etcd configuration (command line flags or environment variables)
Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)
Relevant log output
No response