ApsaraDB / PolarDB-for-PostgreSQL

A cloud-native database based on PostgreSQL developed by Alibaba Cloud.
https://apsaradb.github.io/PolarDB-for-PostgreSQL/zh/
Apache License 2.0

The configuration file used to build the three-node HA #46

Closed Mr-Diaolvyu closed 3 years ago

Mr-Diaolvyu commented 3 years ago

I want to build an HA cluster with three physical servers. Is the configuration correct? DDB1, DDB2, and DDB3 are host names mapped in /etc/hosts. polardb_paxos.txt

yuwei-michael commented 3 years ago

When you run `onekey.sh all` or `pgxc_ctl -c $HOME/polardb/polardb_paxos.conf prepare standalone`, it generates the default configuration file polardb_paxos.conf, which sets up a 3-node (1 leader, 2 followers) HA cluster on a single physical server. To build the three-node HA cluster on different servers, you just need to change the following 3 parameters in the default configuration file from "localhost" to your servers' IPs. By the way, don't forget to "set up authorized key for fast access" with ssh-copy-id for each of your servers.

```
datanodeMasterServers=(localhost)   # none means this master is not available.
datanodeSlaveServers=(localhost)    # value none means this slave is not available
datanodeLearnerServers=(localhost)  # value none means this learner is not available
```

——>

```
datanodeMasterServers=(Server1 IP)  # none means this master is not available.
datanodeSlaveServers=(Server2 IP)   # value none means this slave is not available
datanodeLearnerServers=(Server3 IP) # value none means this learner is not available
```
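As a concrete sketch of that edit (the `192.0.2.x` addresses and the `sed` commands are illustrative assumptions, not project-provided tooling), the localhost-to-IP change can be scripted; the heredoc stands in for the generated default file so the sketch is self-contained:

```shell
#!/bin/sh
# Hedged sketch: rewrite the three *Servers parameters in polardb_paxos.conf
# from "localhost" to real server IPs. 192.0.2.1-3 are placeholder
# (TEST-NET) addresses -- substitute your own servers' IPs.
CONF=polardb_paxos.conf

# Stand-in for the relevant lines of the generated default config.
cat > "$CONF" <<'EOF'
datanodeMasterServers=(localhost)   # none means this master is not available.
datanodeSlaveServers=(localhost)    # value none means this slave is not available
datanodeLearnerServers=(localhost)  # value none means this learner is not available
EOF

# Point each role (leader, follower, learner) at a different physical server.
sed -i \
    -e 's/^datanodeMasterServers=(localhost)/datanodeMasterServers=(192.0.2.1)/' \
    -e 's/^datanodeSlaveServers=(localhost)/datanodeSlaveServers=(192.0.2.2)/' \
    -e 's/^datanodeLearnerServers=(localhost)/datanodeLearnerServers=(192.0.2.3)/' \
    "$CONF"

grep 'Servers=' "$CONF"
```

After this, remember the ssh-copy-id step for each server, since pgxc_ctl drives the remote nodes over passwordless SSH.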

Mr-Diaolvyu commented 3 years ago

> When you run `onekey.sh all` or `pgxc_ctl -c $HOME/polardb/polardb_paxos.conf prepare standalone`, it generates the default configuration file polardb_paxos.conf, which sets up a 3-node (1 leader, 2 followers) HA cluster on a single physical server. To build the three-node HA cluster on different servers, you just need to change the following 3 parameters in the default configuration file from "localhost" to your servers' IPs. By the way, don't forget to "set up authorized key for fast access" with ssh-copy-id for each of your servers.
>
> ```
> datanodeMasterServers=(localhost)   # none means this master is not available.
> datanodeSlaveServers=(localhost)    # value none means this slave is not available
> datanodeLearnerServers=(localhost)  # value none means this learner is not available
> ```
>
> ——>
>
> ```
> datanodeMasterServers=(Server1 IP)  # none means this master is not available.
> datanodeSlaveServers=(Server2 IP)   # value none means this slave is not available
> datanodeLearnerServers=(Server3 IP) # value none means this learner is not available
> ```

I don't understand this configuration. If I want DDB1 to be the master, it is also a datanode, and DDB2 and DDB3 are datanodes too. So my configuration should be as follows:

```
#---- Overall ---------------
primaryDatanode=DDB1                    # Primary Node.
datanodeNames=(DDB1 DDB2 DDB3)
datanodePorts=(10001 10001 10001)       # Master and slave use the same port!
datanodePoolerPorts=(10011 10011 10011) # Master and slave use the same port!
datanodePgHbaEntries=(::1/128)          # Assumes that all the coordinator (master/slave) accepts
                                        # the same connection
                                        # This list sets up pg_hba.conf for $pgxcOwner user.
                                        # If you'd like to setup other entries, supply them
                                        # through extra configuration files specified below.
datanodePgHbaEntries=(127.0.0.1/32)     # Same as above but for IPv4 connections

#---- Master ----------------
datanodeMasterServers=(DDB1 DDB2 DDB3)  # none means this master is not available.
                                        # This means that there should be the master but is down.
                                        # The cluster is not operational until the master is
                                        # recovered and ready to run.
datanodeMasterDirs=($datanodeMasterDir $datanodeMasterDir $datanodeMasterDir)
datanodeMaxWalSender=5                  # max_wal_senders: needed to configure slave. If zero value is
                                        # specified, it is expected this parameter is explicitly supplied
                                        # by external configuration files.
                                        # If you don't configure slaves, leave this value zero.
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender $datanodeMaxWalSender)
                                        # max_wal_senders configuration for each datanode

#---- Slave -----------------
datanodeSlave=y                         # Specify y if you configure at least one coordinator slave.
                                        # Otherwise, the following configuration parameters will be
                                        # set to empty values.
                                        # If no effective server names are found (that is, every server
                                        # is specified as none), then datanodeSlave value will be set
                                        # to n and all the following values will be set to empty values.
datanodeSlaveServers=(DDB1 DDB2 DDB3)        # value none means this slave is not available
datanodeSlavePorts=(10101 10101 10101)       # Master and slave use the same port!
datanodeSlavePoolerPorts=(10111 10111 10111) # Master and slave use the same port!
datanodeSlaveSync=y                          # If datanode slave is connected in synchronized mode
datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir $datanodeSlaveDir)
datanodeArchLogDirs=($datanodeArchLogDir $datanodeArchLogDir $datanodeArchLogDir)
datanodeRepNum=2                             # no HA setting 0, streaming HA and active-active logical
                                             # replication setting 1, paxos HA setting 2.
datanodeSlaveType=(3 3 3)                    # 1 is streaming HA, 2 is active-active logical replication,
                                             # 3 is paxos HA.

#---- Learner -----------------
datanodeLearnerServers=(DDB1 DDB2 DDB3)  # value none means this learner is not available
datanodeLearnerPorts=(11001 11001 11001) # learner port!
datanodeSlavePoolerPorts=(11011)         # learner pooler port!
datanodeLearnerSync=y                    # If datanode learner is connected in synchronized mode
datanodeLearnerDirs=($datanodeLearnerDir $datanodeLearnerDir $datanodeLearnerDir)
```

yuwei-michael commented 3 years ago

pgxc_ctl is mainly used for distributed clusters, but we haven't open-sourced the distributed DB part yet. In a distributed DB cluster, where you want to configure 3 DNs (each DN holding different data), you would configure something like `datanodeNames=(DDB1 DDB2 DDB3)`. But your requirement is 3-node HA within a single DN: that single DN consists of 3 nodes that all hold the same data; they are just replicas of each other. So three fixed IPs are enough.
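The distinction can be sketched as a config fragment (a hypothetical illustration in pgxc_ctl's bash-array syntax; the exact values are assumptions drawn from this thread, not a tested configuration):

```shell
# Single DN with 3-node paxos HA -- what this release supports.
# ONE datanode name; the three hosts are replicas of the SAME data.
datanodeNames=(datanode_1)
datanodeMasterServers=(DDB1)    # leader
datanodeSlaveServers=(DDB2)     # follower
datanodeLearnerServers=(DDB3)   # learner
datanodeRepNum=2                # 2 = paxos HA replication
datanodeSlaveType=(3)           # 3 = paxos HA

# For contrast only -- NOT supported in this release:
# datanodeNames=(DDB1 DDB2 DDB3) would declare three *different*
# datanodes, each holding different data (the distributed/sharded mode).
```

The key point is that `datanodeNames` lists logical datanodes (shards), while the `*Servers` arrays list the physical hosts serving one datanode's replicas.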

Mr-Diaolvyu commented 3 years ago

> pgxc_ctl is mainly used for distributed clusters, but we haven't open-sourced the distributed DB part yet. In a distributed DB cluster, where you want to configure 3 DNs (each DN holding different data), you would configure something like `datanodeNames=(DDB1 DDB2 DDB3)`. But your requirement is 3-node HA within a single DN: that single DN consists of 3 nodes that all hold the same data; they are just replicas of each other. So three fixed IPs are enough.

Hello, what I hope to achieve is that the three servers store data in slices (shards), with data replicas, somewhat like HDFS. Is that available? With my current configuration, when I execute `pgbench -i --unlogged-tables -s 1000 -p 10001 -d pgbench` on one of the servers, data is written only to the PG service on that server; nothing is written to the other two.

yuwei-michael commented 3 years ago

Hi Mr-Diaolvyu, we don't support slicing data across nodes yet. That feature will be released in the next half year. For now we support 3 nodes for HA only, similar to streaming HA: all 3 nodes hold the same data as HA replicas.

Mr-Diaolvyu commented 3 years ago

> Hi Mr-Diaolvyu, we don't support slicing data across nodes yet. That feature will be released in the next half year. For now we support 3 nodes for HA only, similar to streaming HA: all 3 nodes hold the same data as HA replicas.

OK, could you please provide a sample of a fully distributed configuration document for my reference? Thank you.

yuwei-michael commented 3 years ago

Hi Mr-Diaolvyu, the distributed feature is still not supported for now. If you are interested in how the distributed feature is configured, you can check the pgxc_ctl help at https://www.postgres-xl.org/documentation/pgxc-ctl.html.

Mr-Diaolvyu commented 3 years ago

> Hi Mr-Diaolvyu, the distributed feature is still not supported for now. If you are interested in how the distributed feature is configured, you can check the pgxc_ctl help at https://www.postgres-xl.org/documentation/pgxc-ctl.html.

OK, thanks.

yuwei-michael commented 3 years ago

You are welcome!