Closed · dodo920306 closed this 1 month ago
Hello!
Thanks for your request. Can you clarify a bit? I don't understand how this is possible: if your name doesn't match the Patroni name, Ivory can't tell that the host is part of the cluster.
I'm not quite sure how Ivory works.
However, let's say I have two servers with Patroni working properly between them. The domain name of the first one is `host1.example.com`, while that of the other is `host2.example.com`. They could also be numeric IPs, which is closer to my real situation. In their respective configurations, `/etc/patroni.yml`, I set the value of the `name` setting to `node1` and `node2`, both of which have nothing to do with their domain names. According to the YAML Configuration Settings documentation, there is no specific restriction on the value of `name`, the name of the host, except that it must be unique within the cluster, so this is totally legitimate. The cluster itself is given an arbitrary name like `postgres` through the value of the `scope` setting on both hosts.
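For reference, here is a minimal sketch of the `/etc/patroni.yml` described above, showing only the keys relevant to this issue (all other required Patroni settings are omitted, and the addresses are the illustrative ones from this report):

```yaml
# /etc/patroni.yml on host1.example.com
# (host2.example.com mirrors this with name: node2 and its own addresses)
scope: postgres        # cluster name, identical on both hosts
name: node1            # member name, unique per host; unrelated to the domain name
restapi:
  listen: 0.0.0.0:8008
  connect_address: host1.example.com:8008
postgresql:
  listen: 0.0.0.0:5432
  connect_address: host1.example.com:5432
```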
If I start Patroni now, with all the other configuration properly in place, I can run

```shell
$ patronictl -c /etc/patroni.yml list postgres
```

on either of the two hosts and get a very similar result on both:

```
+ Cluster: postgres (7277694203142172922) --+-----------+----+-----------+
| Member | Host                   | Role    | State     | TL | Lag in MB |
+--------+------------------------+---------+-----------+----+-----------+
| node1  | host1.example.com:5432 | Leader  | running   |  5 |           |
| node2  | host2.example.com:5432 | Replica | streaming |  5 |         0 |
+--------+------------------------+---------+-----------+----+-----------+
```
Only after that do I import this cluster into Ivory on another host. I can use either of the domain names to import one instance, and Ivory then discovers the rest of the hosts in the cluster.
Thus, the page will look like this (screenshot omitted).
At this moment, everything is fine. I can click on the cluster, the instances, and the database schemas and see everything.
However, if I click on `host1.example.com`, the leader, and select Switchover on the left, a window with "Make a switchover of host1.example.com" pops up, and if I click Yes, the 412 error happens.
According to the Patroni logs on `host1.example.com` shown by `journalctl`, `host1.example.com` indeed received this request with `leader=host1.example.com`, but the real leader name is `node1`.
Therefore, I think the problem is that during a switchover (and failover), Ivory passes the domain name to the API as the leader name instead of the real member name in the cluster.
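To make the mismatch concrete: Patroni's switchover endpoint matches the `leader` field of the request body against the member's `name` from the cluster, not its host or IP, and rejects a mismatch with HTTP 412. A small sketch (the helper function is hypothetical, not Ivory's code) of building the body correctly:

```python
import json

def build_switchover_body(leader_member_name, candidate_member_name=None):
    """Build the JSON body for Patroni's POST /switchover endpoint.

    Patroni compares `leader` against the member's patroni `name`
    (e.g. "node1"), not its host/IP; a mismatch yields HTTP 412.
    """
    body = {"leader": leader_member_name}
    if candidate_member_name is not None:
        body["candidate"] = candidate_member_name
    return json.dumps(body)

# Sending the domain name reproduces the 412 described above:
wrong = build_switchover_body("host1.example.com")
# Sending the real member name is what Patroni accepts:
right = build_switchover_body("node1", "node2")
print(right)  # {"leader": "node1", "candidate": "node2"}
```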
Ok, thanks for such a detailed clarification; yeah, you are right. For now Ivory uses the host as the name, because I always use the same value for both and hadn't even thought that the Patroni API requires the member name :) I will think about how to improve it. I can't say when it is going to be changed, but I will try my best.
Thanks! This project helped me a lot. I'm really looking forward to this improvement.
I've fixed the problem with switchover and failover, but I don't want to display patroni.name in the UI. Let me know how frustrating this is for you.
P.S. The fix is going to be released under v1.3.3.
It's not frustrating at all. I tried deploying the latest version, and it's currently working well. Thanks for the quick fix and update.
Is your feature request related to a problem? Please describe. Please consider making instance names the same as the real patroni.name in the JSON returned from the REST API, since this would make switchover/failover possible for instances whose names differ from their IPs.
A mismatch between instance IPs and instance names in Patroni causes a 412 error when attempting a switchover/failover, since the provided name differs from the real name of the leader.
Describe the solution you'd like Provide the real name of the leader instance in the requests of switchover/failover.
Describe alternatives you've considered None.
Additional context
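As additional context, one way the desired behavior could work: Patroni's GET /cluster endpoint returns each member's `name` and `host`, so the real member name for a given host can be looked up before issuing the switchover request. A hypothetical sketch of that mapping (the JSON shape mirrors the cluster in this issue; values are illustrative):

```python
def member_name_for_host(cluster_json, host):
    """Given the JSON from Patroni's GET /cluster endpoint, return the
    patroni `name` of the member whose `host` matches, or None."""
    for member in cluster_json.get("members", []):
        if member.get("host") == host:
            return member["name"]
    return None

# Shape based on the cluster in this issue:
cluster = {
    "members": [
        {"name": "node1", "role": "leader", "host": "host1.example.com", "port": 5432},
        {"name": "node2", "role": "replica", "host": "host2.example.com", "port": 5432},
    ]
}
print(member_name_for_host(cluster, "host1.example.com"))  # node1
```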