xy720 opened 3 years ago
We have already finished stage 1 in PR https://github.com/apache/incubator-doris/pull/6289
The performance we measured is below:
| Cores | Memory |
|---|---|
| 96 | 360G |
1. Use mysqlslap, MySQL's load-generation tool, for stress testing.
Three source tables with the same structure (average row size 35 bytes) in the MySQL database (t1, t2, t3):

```
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| a     | int(11) | NO   | PRI | NULL    |       |
| b     | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
```
Three target tables in the Doris database (test1, test2, test3):

```
+-------+------+------+-------+---------+---------+
| Field | Type | Null | Key   | Default | Extra   |
+-------+------+------+-------+---------+---------+
| a     | INT  | No   | true  | NULL    |         |
| b     | INT  | No   | false | NULL    | REPLACE |
+-------+------+------+-------+---------+---------+
```
| stage | rows | data size | cost (s) | avg TPS | speed |
|---|---|---|---|---|---|
| MySQL insert | 10,000,000 | 120 MB | 74 | 135,135 | 1.6 MB/s |
| Doris synchronization | 10,000,000 | 120 MB | 81 | 123,457 | 1.5 MB/s |
| stage | rows | data size | cost (s) | avg TPS | speed |
|---|---|---|---|---|---|
| MySQL insert | 10,000,000 | 120 MB | 76 | 131,578 | 1.6 MB/s |
| Doris synchronization | 10,000,000 | 120 MB | 72 | 138,889 | 1.7 MB/s |
| stage | op | rows | data size | cost (s) | avg TPS | speed |
|---|---|---|---|---|---|---|
| MySQL delete | delete | 10,000,000 | 120 MB | 30 | 330,000 | 4 MB/s |
| Doris synchronization | delete | 10,000,000 | 120 MB | 35 | 300,000 | 3.4 MB/s |
This feature is great! Are you still working on stage 2? @xy720
As we know, the binlog is the basic infrastructure of the MySQL replication architecture. Synchronization between replicas is carried out by reading and writing the binary log file (binlog) stored on the MySQL master server.
In MySQL cluster mode, only one replica is responsible for writing while the others are responsible for reading. Therefore, the replication architecture is usually composed of one master (responsible for writing) and one or more slaves (responsible for reading).
All data changes on the master node are first written into the local binlog, then copied to the slave nodes.
On the master node, binlog files are named mysql-bin.000001, mysql-bin.000002, and so on; MySQL automatically rotates (segments) the binlog.
On the slave node, the binlog file name and position (offset) are saved in a file or table to record the latest consumption location.
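To make the resume mechanism concrete, here is a minimal sketch of persisting a binlog consumption position (file name plus offset) so a consumer can continue where it left off after a restart. The class name and JSON layout are illustrative, not MySQL's actual master.info/relay-log.info format:

```python
import json
import os

# Illustrative sketch: persist the binlog position (file name + offset)
# so a consumer can resume after a restart. The file format here is a
# made-up JSON checkpoint, not MySQL's real metadata layout.
class BinlogCheckpoint:
    def __init__(self, path):
        self.path = path

    def save(self, binlog_file, position):
        # Write atomically: dump to a temp file, then rename over the old
        # checkpoint so a crash never leaves a half-written file behind.
        data = {"binlog_file": binlog_file, "position": position}
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(data, f)
        os.replace(tmp, self.path)

    def load(self):
        # Return (file, position), or None when starting fresh.
        if not os.path.exists(self.path):
            return None
        with open(self.path) as f:
            data = json.load(f)
        return data["binlog_file"], data["position"]
```

On restart the consumer calls `load()` and asks the master to stream the binlog from that file and offset, which is what makes incremental synchronization resumable.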
In order to get binlogs, the slave node sends a binlog dump command to the master node, and the dump thread of the master server then continuously pushes binlog events to the slave.
That is to say, we can get binlogs from the master node by forging this dump command. We can use Alibaba's Canal to achieve this goal.
Canal forges the dump protocol to disguise itself as a slave node, fetching and parsing the master server's binlog. It then stores the parsed data in an in-memory ring queue, waiting for clients to subscribe and consume it.
Therefore, with Canal as the intermediary, the FE can fetch and synchronize the binlogs from the master node. The blueprint for the first stage is below:
The work we need to do is divided into two stages:
Stage 1: 1. Support creating a consumption job and data channel in the FE to fetch parsed data from Canal. 2. Support incrementally synchronizing the data changed in MySQL, ensuring that data is neither lost nor duplicated.
Stage 2: 1. Support synchronizing and executing MySQL DDL statements. 2. Embed Canal into the FE to remove one serialization step, so a Canal server no longer needs to be deployed independently.
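For reference, a stage-1 sync job along the lines of this test setup might be created with a statement like the following. This is a sketch based on the canal-based binlog design above; the job name, addresses, destination, and credentials are placeholders, and the exact property names should be checked against the implementation in the PR:

```sql
-- Hypothetical example: one sync job mapping the three MySQL source
-- tables to the three Doris target tables through a Canal server.
CREATE SYNC test_db.mysql_sync_job
(
    FROM mysql_db.t1 INTO test1,
    FROM mysql_db.t2 INTO test2,
    FROM mysql_db.t3 INTO test3
)
FROM BINLOG
(
    "type" = "canal",
    "canal.server.ip" = "127.0.0.1",
    "canal.server.port" = "11111",
    "canal.destination" = "example",
    "canal.username" = "canal",
    "canal.password" = "canal"
);
```

In this shape, each channel maps one MySQL source table to one Doris target table, and the binlog clause tells the FE which Canal destination to subscribe to.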