What did you do?
If possible, provide a recipe for reproducing the error.
Start drainer in a cluster with long history.
What did you expect to see?
Drainer can start up (even if slowly).
What did you see instead?
OOM. The peak memory usage reaches 64 GB. Note that the GlobalID in the cluster is 4593562, which means there are probably millions of DDL jobs.
Please provide the related downstream type and version of drainer.
(run drainer -V in terminal to get drainer's version)
v5.1.2
Notes
As a solution, we can use a CDC-like method to resolve this: load a schema snapshot and then apply DDL jobs incrementally.
Alternatively, we can paginate the scan of history DDL jobs to reduce peak memory usage.
Bug Report
Please answer these questions before submitting your issue. Thanks!
From AskTUG: https://asktug.com/t/topic/274497