pingcap / tiflow

This repo maintains DM (a data migration platform) and TiCDC (change data capture for TiDB).
Apache License 2.0

Kafka consumer OOM easily #9919

Open fubinzh opened 8 months ago

fubinzh commented 8 months ago

What did you do?

  1. Create a Kafka changefeed
    /cdc cli changefeed create "--server=127.0.0.1:8301" "--sink-uri=kafka://downstream-kafka.cdc-testbed-tps-3210577-1-861:9092/cdc-event-open-protocol-realtime?max-message-bytes=1048576&protocol=open-protocol&replication-factor=2" "--changefeed-id=kafka-open-protocol-task-1"
  2. Run the sysbench workload
    sysbench --db-driver=mysql --mysql-host=`nslookup upstream-tidb.cdc-testbed-tps-3210577-1-861 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g`  --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=32 --table-size=500000 --create_secondary=off --debug=true --threads=32 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only prepare
    sysbench --db-driver=mysql --mysql-host=`nslookup upstream-tidb.cdc-testbed-tps-3210577-1-861 | awk -F: '{print $2}' | awk 'NR==5' | sed s/[[:space:]]//g`  --mysql-port=4000 --mysql-user=root --mysql-db=workload --tables=32 --table-size=500000 --create_secondary=off --time=600 --debug=true --threads=32 --mysql-ignore-errors=2013,1213,1105,1205,8022,8027,8028,9004,9007,1062 oltp_write_only run
  3. Run the Kafka consumer to consume the workload
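The OOM pattern above is typical of a consumer that decodes messages faster than it can flush them downstream, so in-memory buffering grows with consumer lag. A minimal sketch of the usual mitigation, bounding the number of in-flight messages with a semaphore (the function names and the limit are illustrative, not the consumer's actual code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// processAll simulates consuming n Kafka messages while holding at most
// maxInFlight decoded messages in memory at once. The buffered channel
// acts as a semaphore: the consume loop blocks when the sink falls
// behind, so memory use stays bounded instead of growing with lag.
func processAll(n, maxInFlight int) int {
	sem := make(chan struct{}, maxInFlight)
	var wg sync.WaitGroup
	var processed int64
	for i := 0; i < n; i++ {
		sem <- struct{}{} // acquire a slot; blocks once maxInFlight are pending
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer func() { <-sem }() // release the slot after "writing out"
			atomic.AddInt64(&processed, 1)
		}()
	}
	wg.Wait()
	return int(processed)
}

func main() {
	fmt.Println("processed:", processAll(100, 4))
}
```

Without such back-pressure, a 32 GB limit only delays the kill rather than preventing it, since buffered events accumulate for as long as the sink lags.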

What did you expect to see?

  1. The Kafka consumer should not OOM.

What did you see instead?

  1. With the memory limit configured to 32 GB, the Kafka consumer was OOM-killed due to excessive memory usage.


Versions of the cluster

master

fubinzh commented 8 months ago

/severity moderate