pingcap / tidb-binlog

A tool used to collect and merge tidb's binlog for real-time data backup and synchronization.
Apache License 2.0

drainer support relay log #842

Open july2993 opened 4 years ago

july2993 commented 4 years ago

Is your feature request related to a problem? Please describe:

When using drainer to sync data to TiDB directly, if the upstream cluster goes down totally, the downstream may not reach a consistent status.

We define the downstream as having reached a consistent status at timestamp ts when it contains exactly the data of all upstream transactions with commit-ts <= ts, with no partially written transactions.

The reason drainer can't guarantee that it reaches a consistent status is that it does not write data into the downstream cluster transaction by transaction. It splits each transaction and writes to the downstream concurrently.

As an example, if there’s a transaction at upstream:

begin;
insert into test1(id) values(1);
insert into test1(id) values(2);
...
insert into test1(id) values(100);
commit

drainer will write to the downstream in two transactions concurrently:

one is:

begin;
insert into test1(id) values(1);
insert into test1(id) values(2);
...
insert into test1(id) values(50);
commit;

another one is:

begin;
insert into test1(id) values(51);
insert into test1(id) values(52);
...
insert into test1(id) values(100);
commit;

When the upstream cluster goes down and drainer quits after writing only part of a transaction to the downstream, there is no way to reach a consistent status anymore, because drainer can no longer fetch data from the upstream.

Describe the feature you'd like:

drainer should support an option to enable a relay log when the dest-type is tidb or mysql. Before writing binlog data into the downstream TiDB, it must persist the binlog data first, so that if the upstream cluster goes down, drainer can use the locally persisted data to reach a consistent status at some timestamp.

Describe the alternatives you've considered:

Don't use drainer to sync data to the downstream TiDB directly. Instead, use drainer with dest-type = kafka to persist data into a downstream Kafka cluster, then use another tool like arbiter to sync the data to the downstream TiDB cluster.

Teachability, Documentation, Adoption, Migration Strategy:

Essential features

Add a config option relay_log_dir to drainer. When it's not configured, drainer works as before.

When it's configured, drainer must persist the binlog data before writing it to the downstream TiDB.

When the upstream is down totally, we can start drainer to reach a consistent status, as long as the relay log data in relay_log_dir is not lost.
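A possible config fragment for the proposed option might look like the following. Note that relay_log_dir is the key name proposed by this issue, not an existing drainer option, and its placement under [syncer] is an assumption:

```toml
# drainer.toml (sketch; relay_log_dir is the proposed option)
[syncer]
db-type = "tidb"

# When set, drainer persists binlog data here before applying it
# downstream; when unset, drainer behaves as before.
relay_log_dir = "/data/drainer/relay"
```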

Record format

We can use the same protobuf format (binlog.proto) that drainer currently writes into Kafka when the downstream type is Kafka. Note that we cannot directly persist the binlog in the format received from pump, because decoding it requires some metadata, such as the table schema for a given table ID.

Purge Data

We can simply purge the data in relay_log_dir as soon as it has been written to the downstream.

Implementation

Phase 1: essential features of the relay log pkg.

For performance, we should batch records when persisting to the filesystem with fsync.

Phase 2: make drainer support the relay_log_dir option.

Score

4500

References

TiDB Binlog reference docs

TiDB Binlog source code reading, the part most related to drainer (not published yet; available internally only)

djshow832 commented 4 years ago

I want to join this task.

aylen commented 4 years ago

Is it similar to MySQL replication: drainer reads the pump's binlog, generates a relay log, and then the code is rewritten to sync to tidb or mysql, no longer relying on third-party tools?

july2993 commented 4 years ago

> Is it similar to mysql replication, drainer reads the pump's binlog, generates a relay-log, and then rewrites the code synchronized to tidb or mysql, no longer rely on third-party tools

It can replicate to tidb or mysql using the local relay log to reach a consistent status even when the upstream is down totally. That means it must be able to read only the relay log and replicate the data to tidb. Is this what you mean by "no longer rely on third-party tools"?

IANTHEREAL commented 4 years ago

Has the implementation scheme been determined?