apache / doris


[Proposal] Support synchronizing MySQL binlog in real time #6287

Open xy720 opened 3 years ago

xy720 commented 3 years ago

As we know, the binlog is a basic piece of infrastructure in the MySQL replication architecture. Synchronization between replicas is carried out by reading and writing the binary log file (binlog) stored on the MySQL master server.

In MySQL cluster mode, only one replica is responsible for writing and the other replicas are responsible for reading. Therefore, the replication architecture is usually composed of one master (responsible for writing) and one or more slaves (responsible for reading).

All data changes on the master node are first written into the local binlog and then copied to the slave nodes.

  1. On the master node, the binlog files are named like mysql-bin.000001, mysql-bin.000002, and so on; MySQL automatically rotates (segments) the binlog.

  2. On the slave node, the binlog file name and position (offset) are saved in a file or table to record the latest consumption location.

---------------------                                ---------------------
|       Slave       |                                |      Master       |
|                   |              read              |                   |
| FileName/Position | <<<--------------------------- |    Binlog Files   |
---------------------                                ---------------------
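The filename/position pair above is effectively a consumption checkpoint: it identifies one exact point in the master's binlog stream. A minimal sketch of that idea in Python (`BinlogPosition` and the surrounding names are illustrative, not actual MySQL or Doris code):

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class BinlogPosition:
    """A slave's consumption checkpoint: which binlog file, plus the
    byte offset within it. Comparable, since rotated file names sort
    in creation order."""
    file_name: str   # e.g. "mysql-bin.000002"
    offset: int      # byte offset within that file

# The slave persists the latest position it has applied; on restart it
# resumes the dump request from exactly this point.
saved = BinlogPosition("mysql-bin.000001", 4)
incoming = BinlogPosition("mysql-bin.000002", 120)

# Only move the checkpoint forward, never backward.
if incoming > saved:
    saved = incoming
```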

In order to get binlogs, the slave node sends the MySQL binlog dump command to the master node, and the dump thread of the master server then starts pushing binlog events to the slave server continuously.

-----------                                             ------------
|  Slave  |                                             |  Master  |
-----------                                             ------------
     |                      1. Connect                        |
     | --------------------------------------------------->>> |
     |                      2. Handshake                      |
     | <<<--------------------------------------------------- |
     |                      3. Authenticate                   |
     | --------------------------------------------------->>> |
     |                      4. Registration succeeded         |
     | <<<--------------------------------------------------- |
     |                      5. MySQL dump                     |
     | --------------------------------------------------->>> |
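The registration sequence above can be sketched as a simple exchange. This is purely illustrative: the real MySQL wire protocol uses packets such as COM_REGISTER_SLAVE and COM_BINLOG_DUMP, which are not modeled here.

```python
from collections import deque

# Illustrative model of the slave-side flow; step names mirror the
# diagram above, not the actual wire protocol.
def register_as_slave(send, recv):
    send("connect")                 # 1. connect to the master
    assert recv() == "handshake"    # 2. master replies with a handshake
    send("authenticate")            # 3. send credentials
    assert recv() == "registered"   # 4. master confirms registration
    send("binlog dump")             # 5. request the continuous binlog stream

# A fake master that answers in the expected order:
replies = deque(["handshake", "registered"])
sent = []
register_as_slave(sent.append, replies.popleft)
```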

That is to say, we can obtain binlogs from the master node by forging this dump command. We can use Alibaba's Canal to achieve this goal.

Canal forges the dump protocol to disguise itself as a slave node, fetching and parsing the master server's binlog. It then stores the parsed data in an in-memory ring queue, waiting for other clients to subscribe and consume it.
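The ring queue can be pictured as a bounded buffer with get/ack semantics: entries stay in the queue until the client acknowledges them, so an unacknowledged batch can be fetched again after a client failure. A minimal sketch of that behavior, assuming integer positions (this is not Canal's actual API):

```python
class RingQueue:
    """Bounded buffer: put() appends parsed binlog entries, get() reads a
    batch without removing it, ack() frees entries up to a position."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []          # (position, payload), oldest first
        self.next_pos = 0

    def put(self, payload):
        if len(self.entries) >= self.capacity:
            raise BufferError("ring full: consumer is not acking fast enough")
        self.entries.append((self.next_pos, payload))
        self.next_pos += 1

    def get(self, batch_size):
        # Read without removing: a restarted client can fetch the same
        # batch again, so nothing is lost on a crash before ack.
        return self.entries[:batch_size]

    def ack(self, position):
        # Drop everything up to and including the acked position.
        self.entries = [(p, v) for p, v in self.entries if p > position]

q = RingQueue(capacity=4)
for row in ["insert a=1", "insert a=2", "insert a=3"]:
    q.put(row)
batch = q.get(2)        # reads positions 0 and 1, leaves them in the ring
q.ack(batch[-1][0])     # frees positions 0 and 1 after they are applied
```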

Therefore, with Canal as the intermediary, FE can fetch and synchronize the binlogs of the master node. The blueprint for the first stage is below:

                                               ------------------------------------------------------          ---------------      ------
                                               | ---------- ---------> channel 1 ---------> table1  |          |             | ---> | BE |
             Binlog                    Get     | | client | ---------> channel 2 ---------> table2  |          | Coordinator |      ------
---------- ---------> ----------- ------------>| ---------- ---------> channel 3 ---------> table3  | -------> |             |      ------
|  MySQL |            |  Canal  |      Ack     |                                                    |          |      BE     | ---> | BE |
|        |            |         | <------------|  FE                                                |          |             |      ------
----------            -----------              ------------------------------------------------------          ---------------
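In the blueprint, a single client fans each parsed row out to the channel of its target table. A toy sketch of that dispatch step, assuming each row arrives tagged with its source table name (the mapping and names below are illustrative):

```python
from collections import defaultdict

# Hypothetical mapping from MySQL source tables to Doris target tables.
table_mapping = {"t1": "table1", "t2": "table2", "t3": "table3"}

# One buffer ("channel") per target table; a real channel would batch
# rows and load them into BE through the coordinator.
channels = defaultdict(list)

def dispatch(rows):
    for src_table, row in rows:
        channels[table_mapping[src_table]].append(row)

dispatch([("t1", {"a": 1, "b": 10}),
          ("t2", {"a": 2, "b": 20}),
          ("t1", {"a": 3, "b": 30})])
```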

The work we need to do will be divided into two stages:

Stage 1: 1. Support creating a consumption job and data channels in FE to fetch parsed data from Canal. 2. Support incrementally synchronizing the data changed in MySQL, ensuring that data is neither lost nor duplicated.

Stage 2: 1. Support synchronizing and executing MySQL DDL statements. 2. Embed Canal into FE to eliminate one round of serialization, so that a Canal server no longer needs to be deployed independently.
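The "neither lost nor duplicated" guarantee in stage 1 hinges on when a batch is acknowledged back to Canal: only after it is durably committed on the Doris side, and replaying an already-committed batch must be a no-op. A sketch of that idea, assuming monotonically increasing batch IDs (all names here are illustrative, not the actual implementation):

```python
class Sink:
    """Commits batches idempotently: each batch is applied at most once,
    and its ID is acked only after the commit has taken effect."""
    def __init__(self):
        self.committed_rows = []
        self.last_committed_id = -1   # persisted together with the data

    def commit(self, batch_id, rows):
        if batch_id <= self.last_committed_id:
            return False              # redelivery after a crash: skip, no duplicates
        # In a real system this append and the ID update are atomic.
        self.committed_rows.extend(rows)
        self.last_committed_id = batch_id
        return True                   # now it is safe to ack this batch

sink = Sink()
acked = []
# Batch 1 is delivered twice, simulating a retry after a lost ack.
for batch_id, rows in [(0, ["r1"]), (1, ["r2"]), (1, ["r2"]), (2, ["r3"])]:
    if sink.commit(batch_id, rows):
        acked.append(batch_id)
```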

xy720 commented 3 years ago

We have already finished stage 1 in PR https://github.com/apache/incubator-doris/pull/6289

The performance test results are below:

Environment

| Cores | Memory |
|-------|--------|
| 96    | 360G   |

Test Scenarios

1. Use mysqlslap, MySQL's load-emulation tool, for pressure testing.

Three source tables with the same structure in the MySQL database (t1, t2, t3):
(avg row size 35 bytes)
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| a     | int(11) | NO   | PRI | NULL    |       |
| b     | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
Three target tables in the Doris database (test1, test2, test3):
+-------+------+------+-------+---------+---------+
| Field | Type | Null | Key   | Default | Extra   |
+-------+------+------+-------+---------+---------+
| a     | INT  | No   | true  | NULL    |         |
| b     | INT  | No   | false | NULL    | REPLACE |
+-------+------+------+-------+---------+---------+

Test Results

scene 1

  1. Start up Canal
  2. Create a sync job and start synchronizing
  3. Perform insert operations on the three source tables in MySQL

| stage                 | rows       | data size | cost(s) | tps    | avg speed |
|-----------------------|------------|-----------|---------|--------|-----------|
| mysql insert          | 10,000,000 | 120M      | 74      | 135135 | 1.6M/s    |
| doris synchronization | 10,000,000 | 120M      | 81      | 123457 | 1.5M/s    |
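As a sanity check on the numbers above, tps is simply rows divided by cost, and the average speed is data size divided by cost:

```python
rows = 10_000_000        # 10 million rows ("1000w")
data_mb = 120            # total data size in MB
cost_mysql, cost_doris = 74, 81   # seconds, from the scene 1 table

tps_mysql = round(rows / cost_mysql)   # 135135, matching the insert row
tps_doris = round(rows / cost_doris)   # 123457, matching the sync row

# Average speeds in MB/s, matching the ~1.6M/s and ~1.5M/s columns.
speed_mysql = data_mb / cost_mysql
speed_doris = data_mb / cost_doris
```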

scene 2

  1. Start up Canal
  2. Perform insert operations on the three source tables in MySQL
  3. Create a sync job and start synchronizing

| stage                 | rows       | data size | cost(s) | tps    | avg speed |
|-----------------------|------------|-----------|---------|--------|-----------|
| mysql insert          | 10,000,000 | 120M      | 76      | 131578 | 1.6M/s    |
| doris synchronization | 10,000,000 | 120M      | 72      | 138889 | 1.7M/s    |

scene 3

  1. Create a sync job and start synchronizing
  2. Delete data from the three source tables

| stage                 | op     | rows       | data size | cost(s) | tps     | avg speed |
|-----------------------|--------|------------|-----------|---------|---------|-----------|
| mysql delete          | delete | 10,000,000 | 120M      | 30      | 330,000 | 4M/s      |
| doris synchronization | delete | 10,000,000 | 120M      | 35      | 300,000 | 3.4M/s    |

awesomeleo commented 10 months ago

This feature is great! Are you still working on stage 2? @xy720