StarRocks / starrocks-connector-for-apache-flink


Bug when using Stream Load to synchronize data to StarRocks through SeaTunnel #321

Open cyd257666 opened 10 months ago

cyd257666 commented 10 months ago

Steps to reproduce the behavior (Required)

  1. Source engine: Greenplum
  2. SeaTunnel config:

     ```json
     {
       "sink": [{
         "base-url": "jdbc:mysql://x.x.x.x:19030/dwd",
         "password": "xxx",
         "database": "dwd",
         "batch_max_rows": 64000,
         "max_retries": 10,
         "batch_max_bytes": 94371840,
         "nodeUrls": "[\"x.x.x.x:18030\"]",
         "plugin_name": "StarRocks",
         "table": "xxx",
         "username": "xxx",
         "batch_interval_ms": 10000
       }],
       "source": [{
         "password": "xxx",
         "driver": "org.postgresql.Driver",
         "query": "select column as Column from xxx_v",
         "plugin_name": "Jdbc",
         "user": "xxx",
         "url": "jdbc:postgresql://x.x.x.x:xxx/xxx"
       }],
       "env": {
         "job.mode": "BATCH",
         "execution.parallelism": "1"
       }
     }
     ```

  3. SeaTunnel engine: Flink
  4. Submit with `flink run`
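To make the symptom concrete, here is a minimal sketch of how a Stream Load sink might serialize source rows into a JSON request body. The field name `Column` comes from the query alias in the config above; the function and row values are illustrative assumptions, not the actual connector code.

```python
import json

def build_stream_load_body(rows):
    """Serialize rows as newline-delimited JSON objects, one per row.

    Stream Load's JSON format matches each object's keys against the
    target table's column names; if that matching mishandles case, a
    mixed-case key like "Column" could fail to populate its column.
    """
    return "\n".join(json.dumps(r) for r in rows)

# Hypothetical rows produced by the source query `select column as Column ...`
body = build_stream_load_body([{"Column": "v1"}, {"Column": "v2"}])
print(body)
```

If the keys sent in this body and the column names stored in StarRocks differ only in case, that is exactly the situation the report describes.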

Expected behavior (Required)

All data is synchronized to StarRocks.

Real behavior (Required)

In the StarRocks table, every field whose name contains uppercase letters ends up empty. When we imported the same data through `INSERT INTO`, all of it arrived correctly, so we suspect this is a bug in the Stream Load path.
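The behavior above is consistent with a case-sensitivity mismatch somewhere in the Stream Load path. The sketch below illustrates that hypothesis only; it is an assumption about the failure mode, not the verified connector behavior.

```python
def map_row_to_columns(row, table_columns):
    """Map an incoming row onto table columns with an exact,
    case-sensitive key lookup (the suspected failure mode)."""
    return {col: row.get(col) for col in table_columns}

row = {"Column": "value"}  # key as sent by the source

# If something lowercases the column name on one side only,
# the lookup misses and the field is loaded as NULL:
print(map_row_to_columns(row, ["column"]))
# An exact case match populates the field as expected:
print(map_row_to_columns(row, ["Column"]))
```

This would also explain why `INSERT INTO` works: SQL identifier matching in StarRocks is not case-sensitive in the same way a dictionary key lookup is.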

StarRocks version (Required)

StarRocks version 2.5.13

cyd257666 commented 10 months ago

I also encounter this situation when importing through the StarRocks connector using Flink CDC and Flink. Stream Load requests sent directly over HTTP do not reproduce it.

cyd257666 commented 10 months ago

This is the StarRocks connector version:

```xml
<dependency>
    <groupId>com.starrocks</groupId>
    <artifactId>flink-connector-starrocks</artifactId>
    <version>1.2.7_flink-1.13_${scala.version}</version>
</dependency>
```