DTStack / chunjun

A data integration framework
https://dtstack.github.io/chunjun/
Apache License 2.0

Problem running chunjun mysql-to-mysql in insert mode #1175

Open biandou1313 opened 2 years ago

biandou1313 commented 2 years ago

Search before asking

What happened

Issue 1: In insert mode, a duplicate primary key causes the job to terminate immediately. Issue 2: In insert mode, configuring dirty-data storage does not help either; the job still terminates and the dirty-data table contains no rows.

What you expected to happen

1. In insert mode (mysql-to-mysql), duplicate primary keys should be counted in the job's dirty-data metrics. 2. Dirty records should be written to the dirty-data table, and the job should not terminate.

How to reproduce

1. Run the following job JSON:

{
  "job": {
    "content": [
      {
        "reader": {
          "parameter": {
            "password": "123456",
            "dataSourceId": 8,
            "column": [
              { "precision": 4, "name": "id", "columnDisplaySize": 4, "type": "INT" },
              { "precision": 255, "name": "name", "columnDisplaySize": 255, "type": "VARCHAR" },
              { "precision": 4, "name": "age", "columnDisplaySize": 4, "type": "TINYINT" },
              { "precision": 255, "name": "phone", "columnDisplaySize": 255, "type": "VARCHAR" }
            ],
            "connection": [
              {
                "jdbcUrl": [ "jdbc:mysql://172.18.8.77:3306/zk_test" ],
                "table": [ "one" ]
              }
            ],
            "increColumn": "id",
            "splitPk": "id",
            "username": "root"
          },
          "name": "mysqlreader"
        },
        "writer": {
          "parameter": {
            "password": "123456",
            "dataSourceId": 8,
            "column": [
              { "precision": 4, "name": "id", "columnDisplaySize": 4, "type": "INT" },
              { "precision": 255, "name": "name", "columnDisplaySize": 255, "type": "VARCHAR" },
              { "precision": 4, "name": "age", "columnDisplaySize": 4, "type": "TINYINT" },
              { "precision": 255, "name": "phone", "columnDisplaySize": 255, "type": "VARCHAR" }
            ],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://172.18.8.77:3306/zk_test",
                "table": [ "one_copy1" ]
              }
            ],
            "writeMode": "insert",
            "username": "root"
          },
          "name": "mysqlwriter"
        }
      }
    ],
    "setting": {
      "log": { "isLogger": false },
      "errorLimit": {},
      "speed": { "bytes": 0, "channel": 1 }
    }
  }
}
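Note that the setting.errorLimit block in the job above is left empty. A minimal sketch of how it could be filled in, assuming the ChunJun/FlinkX errorLimit schema with a record threshold (the "record" key below is my assumption from the FlinkX-style config, not something confirmed in this issue), so the job tolerates a bounded number of dirty records instead of failing on the first one:

"errorLimit": {
  "record": 100
}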

2. Dirty-data configuration:

{
  "start-chunjun.dirty-data.output-type": "jdbc",
  "start-chunjun.dirty-data.max-rows": "1000",
  "start-chunjun.dirty-data.max-collect-failed-rows": "100",
  "start-chunjun.dirty-data.jdbc.url": "jdbc:mysql://172.18.8.77:3306/zk_test",
  "start-chunjun.dirty-data.jdbc.username": "root",
  "start-chunjun.dirty-data.jdbc.password": "123456",
  "start-chunjun.dirty-data.jdbc.table": "chunjun_dirty_data",
  "start-chunjun.dirty-data.jdbc.batch-size": "10",
  "start-chunjun.dirty-data.log.print-interval": "10"
}
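For completeness, a minimal table setup that should trigger the duplicate-primary-key collision described above. The DDL is my assumption, inferred from the reader/writer column lists in the job JSON; the issue does not include the original CREATE TABLE statements.

-- Hypothetical DDL inferred from the column lists; not taken from the issue.
CREATE TABLE one (
  id    INT PRIMARY KEY,
  name  VARCHAR(255),
  age   TINYINT,
  phone VARCHAR(255)
);

CREATE TABLE one_copy1 LIKE one;

-- Seed the source table with a few rows.
INSERT INTO one VALUES (1, 'a', 20, '111'), (2, 'b', 21, '222');

-- Pre-insert a row whose id also exists in the source, so the insert-mode
-- writer hits a duplicate-key error on id = 1 when the job runs.
INSERT INTO one_copy1 VALUES (1, 'dup', 30, '999');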

Anything else

No response

Version

master

Are you willing to submit PR?

Code of Conduct

biandou1313 commented 2 years ago

2022-08-23 11:22:17,512 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -------------------------------------------------------------------------------- 2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Preconfiguration: 2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] -

TM_RESOURCE_PARAMS extraction logs: jvm_params: -Xmx8294655496 -Xms8294655496 -XX:MaxDirectMemorySize=1207959552 -XX:MaxMetaspaceSize=268435456 dynamic_configs: -D taskmanager.memory.framework.off-heap.size=134217728b -D taskmanager.memory.network.max=1073741824b -D taskmanager.memory.network.min=1073741824b -D taskmanager.memory.framework.heap.size=134217728b -D taskmanager.memory.managed.size=6335076856b -D taskmanager.cpu.cores=50.0 -D taskmanager.memory.task.heap.size=8160437768b -D taskmanager.memory.task.off-heap.size=0b -D taskmanager.memory.jvm-metaspace.size=268435456b -D taskmanager.memory.jvm-overhead.max=1073741824b -D taskmanager.memory.jvm-overhead.min=1073741824b logs: INFO [] - Loading configuration property: jobmanager.rpc.address, hadoop02 INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 50 INFO [] - Loading configuration property: web.submit.enable, true INFO [] - Loading configuration property: jobmanager.rpc.port, 6123 INFO [] - Loading configuration property: jobmanager.memory.process.size, 2g INFO [] - Loading configuration property: taskmanager.memory.process.size, 16g INFO [] - Loading configuration property: parallelism.default, 5 INFO [] - Loading configuration property: jobmanager.execution.failover-strategy, region WARN [] - Error while trying to split key and value in configuration file /opt/flink/flink-1.12.2/conf/flink-conf.yaml:13: "env.java.home:/usr/java/jdk1.8.0_261" INFO [] - Loading configuration property: jobmanager.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/ INFO [] - Loading configuration property: historyserver.web.address, 0.0.0.0 INFO [] - Loading configuration property: historyserver.archive.fs.refresh-interval, 10000 INFO [] - Loading configuration property: historyserver.web.port, 8082 INFO [] - Loading configuration property: historyserver.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/ INFO [] - Loading configuration property: state.backend, filesystem INFO [] - Loading configuration property: state.backend.fs.checkpointdir, hdfs://hadoop02:8020/flink-checkpoints INFO [] - Loading configuration property: high-availability, zookeeper INFO [] - Loading configuration property: high-availability.storageDir, hdfs://hadoop02:8020/flink/ha/ INFO [] - Loading configuration property: high-availability.zookeeper.quorum, hadoop01:2181,hadoop02:2181,hadoop03:2181 INFO [] - Loading configuration property: metrics.reporter.promgateway.class, org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter INFO [] - Loading configuration property: metrics.reporter.promgateway.host, 172.18.8.211 INFO [] - Loading configuration property: metrics.reporter.promgateway.port, 9091 INFO [] - Loading configuration property: metrics.reporter.promgateway.jobName, flink-metrics INFO [] - Loading configuration property: metrics.reporter.promgateway.randomJobNameSuffix, true INFO [] - Loading configuration property: metrics.reporter.promgateway.deleteOnShutdown, false INFO [] - Loading configuration property: metrics.reporter.promgateway.interval, 30 SECONDS INFO [] - The derived from fraction jvm overhead memory (1.600gb (1717986944 bytes)) is greater than its max value 1024.000mb (1073741824 bytes), max value will be used instead INFO [] - The derived from fraction network memory (1.475gb (1583769214 bytes)) is greater than its max value 1024.000mb (1073741824 bytes), max value will be used instead INFO [] - Final TaskExecutor Memory configuration: INFO [] - Total Process Memory: 16.000gb (17179869184 bytes) INFO [] 
- Total Flink Memory: 14.750gb (15837691904 bytes) INFO [] - Total JVM Heap Memory: 7.725gb (8294655496 bytes) INFO [] - Framework: 128.000mb (134217728 bytes) INFO [] - Task: 7.600gb (8160437768 bytes) INFO [] - Total Off-heap Memory: 7.025gb (7543036408 bytes) INFO [] - Managed: 5.900gb (6335076856 bytes) INFO [] - Total JVM Direct Memory: 1.125gb (1207959552 bytes) INFO [] - Framework: 128.000mb (134217728 bytes) INFO [] - Task: 0 bytes INFO [] - Network: 1024.000mb (1073741824 bytes) INFO [] - JVM Metaspace: 256.000mb (268435456 bytes) INFO [] - JVM Overhead: 1024.000mb (1073741824 bytes)

2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -------------------------------------------------------------------------------- 2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Starting TaskManager (Version: 1.12.7, Scala: 2.12, Rev:88d9950, Date:2021-12-14T23:39:33+01:00) 2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - OS current user: hadoop 2022-08-23 11:22:17,715 WARN org.apache.hadoop.util.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2022-08-23 11:22:17,764 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Current Hadoop/Kerberos user: hadoop 2022-08-23 11:22:17,765 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.261-b12 2022-08-23 11:22:17,765 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Maximum heap size: 7912 MiBytes 2022-08-23 11:22:17,765 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - JAVA_HOME: /usr/java/jdk1.8.0_261 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Hadoop version: 2.8.3 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - JVM Options: 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -XX:+UseG1GC 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Xmx8294655496 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Xms8294655496 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -XX:MaxDirectMemorySize=1207959552 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -XX:MaxMetaspaceSize=268435456 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlog.file=/opt/flink/flink-1.12.2/log/flink-hadoop-taskexecutor-1-hadoop03.log 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlog4j.configuration=file:/opt/flink/flink-1.12.2/conf/log4j.properties 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlog4j.configurationFile=file:/opt/flink/flink-1.12.2/conf/log4j.properties 2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlogback.configurationFile=file:/opt/flink/flink-1.12.2/conf/logback.xml 2022-08-23 11:22:17,767 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Program Arguments: 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - --configDir 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - /opt/flink/flink-1.12.2/conf 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.framework.off-heap.size=134217728b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.network.max=1073741824b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.network.min=1073741824b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.framework.heap.size=134217728b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.managed.size=6335076856b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.cpu.cores=50.0 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.task.heap.size=8160437768b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.task.off-heap.size=0b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.jvm-metaspace.size=268435456b 2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.jvm-overhead.max=1073741824b 2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D 2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.jvm-overhead.min=1073741824b 2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Classpath: 
/opt/flink/flink-1.12.2/lib/flink-csv-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-json-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.2.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/flink-1.12.2/lib/flink-table_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-table-blink_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/log4j-1.2-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-core-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-slf4j-impl-2.16.0.jar:/opt/flink/flink-1.12.2/lib/flink-dist_2.12-1.12.7.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/servlet-ap
i-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/ha
doop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.ja
r:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/contrib/capacity-scheduler/*.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop: 2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -------------------------------------------------------------------------------- 2022-08-23 11:22:17,770 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Registered UNIX signal handlers for [TERM, HUP, INT] 2022-08-23 11:22:17,772 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Maximum number of open file descriptors is 1000000. 
2022-08-23 11:22:17,778 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, hadoop02 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 50 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: web.submit.enable, true 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 2g 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 16g 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 5 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region 2022-08-23 11:22:17,779 WARN org.apache.flink.configuration.GlobalConfiguration [] - Error while trying to split key and value in configuration file /opt/flink/flink-1.12.2/conf/flink-conf.yaml:13: "env.java.home:/usr/java/jdk1.8.0_261" 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/ 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.web.address, 0.0.0.0 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.archive.fs.refresh-interval, 10000 2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.web.port, 8082 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/ 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.backend, filesystem 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.backend.fs.checkpointdir, hdfs://hadoop02:8020/flink-checkpoints 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: high-availability, zookeeper 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: high-availability.storageDir, hdfs://hadoop02:8020/flink/ha/ 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: high-availability.zookeeper.quorum, hadoop01:2181,hadoop02:2181,hadoop03:2181 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.class, org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.host, 172.18.8.211 2022-08-23 11:22:17,780 INFO 
org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.port, 9091 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.jobName, flink-metrics 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.randomJobNameSuffix, true 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.deleteOnShutdown, false 2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.interval, 30 SECONDS 2022-08-23 11:22:17,884 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to hadoop (auth:SIMPLE) 2022-08-23 11:22:17,894 INFO org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file will be created as /tmp/jaas-7284308405517075844.conf. 2022-08-23 11:22:18,301 INFO org.apache.flink.runtime.blob.FileSystemBlobStore [] - Creating highly available BLOB storage directory at hdfs://hadoop02:8020/flink/ha/default/blob 2022-08-23 11:22:18,345 INFO org.apache.flink.runtime.util.ZooKeeperUtils [] - Enforcing default ACL for ZK connections 2022-08-23 11:22:18,345 INFO org.apache.flink.runtime.util.ZooKeeperUtils [] - Using '/flink/default' as Zookeeper namespace. 2022-08-23 11:22:18,372 INFO org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - Running in ZooKeeper 3.4.x compatibility mode 2022-08-23 11:22:18,372 INFO org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - Using emulated InjectSessionExpiration 2022-08-23 11:22:18,389 INFO org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl [] - Starting 2022-08-23 11:22:18,394 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:host.name=hadoop03 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.version=1.8.0_261 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.vendor=Oracle Corporation 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.home=/usr/java/jdk1.8.0_261/jre 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.class.path=/opt/flink/flink-1.12.2/lib/flink-csv-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-json-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.2.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/flink-1.12.2/lib/flink-table_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-table-blink_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/log4j-1.2-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-core-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-slf4j-impl-2.16.0.jar:/opt/flink/flink-1.12.2/lib/flink-dist_2.12-1.12.7.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/hadoop-2.7.4/share/
hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-
codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoo
p-yarn-server-tests-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/contrib/capacity-scheduler/*.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop: 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.io.tmpdir=/tmp 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.compiler= 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:os.name=Linux 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:os.arch=amd64 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:os.version=3.10.0-957.el7.x86_64 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:user.name=hadoop 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:user.home=/home/hadoop 2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:user.dir=/home/hadoop 2022-08-23 11:22:18,396 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Initiating client connection, connectString=hadoop01:2181,hadoop02:2181,hadoop03:2181 sessionTimeout=60000 watcher=org.apache.flink.shaded.curator4.org.apache.curator.ConnectionState@1efdcd5 2022-08-23 11:22:18,405 WARN org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-7284308405517075844.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 2022-08-23 11:22:18,406 INFO org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl [] - Default schema 2022-08-23 11:22:18,406 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] - Opening socket connection to server hadoop02/172.18.8.207:2181 2022-08-23 11:22:18,407 ERROR org.apache.flink.shaded.curator4.org.apache.curator.ConnectionState [] - Authentication failed 2022-08-23 11:22:18,407 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] - Socket connection established to hadoop02/172.18.8.207:2181, initiating session 2022-08-23 11:22:18,412 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] - Session establishment complete on server hadoop02/172.18.8.207:2181, sessionid = 0x200dc18313f0002, negotiated timeout = 40000 2022-08-23 11:22:18,413 INFO org.apache.flink.shaded.curator4.org.apache.curator.framework.state.ConnectionStateManager [] - State change: CONNECTED 2022-08-23 11:22:18,435 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}. 2022-08-23 11:22:18,435 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils [] - Trying to select the network interface and address to use by connecting to the leading JobManager. 2022-08-23 11:22:18,435 INFO org.apache.flink.runtime.util.LeaderRetrievalUtils [] - TaskManager will try to connect for PT10S before falling back to heuristics 2022-08-23 11:22:19,090 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Trying to connect to address hadoop02/172.18.8.207:46317 2022-08-23 11:22:19,093 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Stopping DefaultLeaderRetrievalService. 
2022-08-23 11:22:19,093 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] - Closing ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}. 2022-08-23 11:22:19,093 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - TaskManager will use hostname/address 'hadoop03' (172.18.8.208) for communication. 2022-08-23 11:22:19,102 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils [] - Trying to start actor system, external address 172.18.8.208:0, bind address 0.0.0.0:0. 2022-08-23 11:22:19,618 INFO akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started 2022-08-23 11:22:19,639 INFO akka.remote.Remoting [] - Starting remoting 2022-08-23 11:22:19,734 INFO akka.remote.Remoting [] - Remoting started; listening on addresses :[akka.tcp://flink@172.18.8.208:32813] 2022-08-23 11:22:19,833 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils [] - Actor system started at akka.tcp://flink@172.18.8.208:32813 2022-08-23 11:22:19,862 WARN org.apache.flink.runtime.metrics.ReporterSetup [] - Multiple implementations of the same reporter were found in 'lib' and/or 'plugins' directories for org.apache.flink.metrics.prometheus.PrometheusReporterFactory. It is recommended to remove redundant reporter JARs to resolve used versions' ambiguity. 2022-08-23 11:22:19,862 WARN org.apache.flink.runtime.metrics.ReporterSetup [] - Multiple implementations of the same reporter were found in 'lib' and/or 'plugins' directories for org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterFactory. It is recommended to remove redundant reporter JARs to resolve used versions' ambiguity. 2022-08-23 11:22:19,870 INFO org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter [] - Configured PrometheusPushGatewayReporter with {host:172.18.8.211, port:9091, jobName:flink-metricsdbb57d43256781ca47e7c449a3899d88, randomJobNameSuffix:true, deleteOnShutdown:false, groupingKey:{}} 2022-08-23 11:22:19,872 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl [] - Periodically reporting metrics in intervals of 30 s for reporter promgateway of type org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter. 2022-08-23 11:22:19,875 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils [] - Trying to start actor system, external address 172.18.8.208:0, bind address 0.0.0.0:0. 2022-08-23 11:22:19,888 INFO akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started 2022-08-23 11:22:19,892 INFO akka.remote.Remoting [] - Starting remoting 2022-08-23 11:22:19,897 INFO akka.remote.Remoting [] - Remoting started; listening on addresses :[akka.tcp://flink-metrics@172.18.8.208:36045] 2022-08-23 11:22:19,935 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils [] - Actor system started at akka.tcp://flink-metrics@172.18.8.208:36045 2022-08-23 11:22:19,946 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService at akka://flink-metrics/user/rpc/MetricQueryService_172.18.8.208:32813-f64c27 . 
2022-08-23 11:22:19,956 INFO org.apache.flink.runtime.blob.PermanentBlobCache [] - Created BLOB cache storage directory /tmp/blobStore-10495150-da0b-4da2-add0-4061000e782a 2022-08-23 11:22:19,959 INFO org.apache.flink.runtime.blob.TransientBlobCache [] - Created BLOB cache storage directory /tmp/blobStore-4de2ec53-416f-4104-a459-9aebef15cef6 2022-08-23 11:22:19,961 INFO org.apache.flink.runtime.externalresource.ExternalResourceUtils [] - Enabled external resources: [] 2022-08-23 11:22:19,961 INFO org.apache.flink.runtime.externalresource.ExternalResourceUtils [] - Enabled external resources: [] 2022-08-23 11:22:19,961 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Starting TaskManager with ResourceID: 172.18.8.208:32813-f64c27 2022-08-23 11:22:19,986 INFO org.apache.flink.runtime.taskexecutor.TaskManagerServices [] - Temporary file directory '/tmp': total 445 GB, usable 429 GB (96.40% usable) 2022-08-23 11:22:19,988 INFO org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager uses directory /tmp/flink-io-a80f275e-ab0d-4bcd-b9d6-6473b82576b5 for spill files. 2022-08-23 11:22:19,995 INFO org.apache.flink.runtime.io.network.netty.NettyConfig [] - NettyConfig [server address: /0.0.0.0, server port: 0, ssl enabled: false, memory segment size (bytes): 32768, transport type: AUTO, number of server threads: 50 (manual), number of client threads: 50 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)] 2022-08-23 11:22:19,997 INFO org.apache.flink.runtime.io.disk.FileChannelManagerImpl [] - FileChannelManager uses directory /tmp/flink-netty-shuffle-0c270f98-d525-4d54-9ba1-8de8ed65d000 for spill files. 2022-08-23 11:22:20,536 INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool [] - Allocated 1024 MB for network buffer pool (number of memory segments: 32768, bytes per segment: 32768). 2022-08-23 11:22:20,547 INFO org.apache.flink.runtime.io.network.NettyShuffleEnvironment [] - Starting the network environment and its components. 2022-08-23 11:22:20,599 INFO org.apache.flink.runtime.io.network.netty.NettyClient [] - Transport type 'auto': using EPOLL. 2022-08-23 11:22:20,600 INFO org.apache.flink.runtime.io.network.netty.NettyClient [] - Successful initialization (took 53 ms). 2022-08-23 11:22:20,611 INFO org.apache.flink.runtime.io.network.netty.NettyServer [] - Transport type 'auto': using EPOLL. 2022-08-23 11:22:20,638 INFO org.apache.flink.runtime.io.network.netty.NettyServer [] - Successful initialization (took 35 ms). Listening on SocketAddress /0:0:0:0:0:0:0:0%0:42174. 2022-08-23 11:22:20,638 INFO org.apache.flink.runtime.taskexecutor.KvStateService [] - Starting the kvState service and its components. 2022-08-23 11:22:20,657 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.taskexecutor.TaskExecutor at akka://flink/user/rpc/taskmanager_0 . 2022-08-23 11:22:20,670 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}. 2022-08-23 11:22:20,671 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Start job leader service. 
2022-08-23 11:22:20,672 INFO org.apache.flink.runtime.filecache.FileCache [] - User file cache uses directory /tmp/flink-dist-cache-6cbcb904-78e7-4a17-ac00-02a7a41a2071 2022-08-23 11:22:20,675 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Connecting to ResourceManager akka.tcp://flink@hadoop02:46317/user/rpc/resourcemanager_0(b7dd1eb6fe16f6f0bff19fc261e44990). 2022-08-23 11:22:20,923 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Resolved ResourceManager address, beginning registration 2022-08-23 11:22:20,990 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Successful registration at resource manager akka.tcp://flink@hadoop02:46317/user/rpc/resourcemanager_0 under registration id 21734e303143f0758e4d2c50f7b8eb15. 2022-08-23 11:22:44,044 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Receive slot request b20ccfa7fc8e8a291ea144cd5ac4e841 for job 62179b92f6a8bf69378e6a7e872335d2 from resource manager with leader id b7dd1eb6fe16f6f0bff19fc261e44990. 2022-08-23 11:22:44,049 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Allocated slot for b20ccfa7fc8e8a291ea144cd5ac4e841. 2022-08-23 11:22:44,050 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Add job 62179b92f6a8bf69378e6a7e872335d2 for job leader monitoring. 2022-08-23 11:22:44,051 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/62179b92f6a8bf69378e6a7e872335d2/job_manager_lock'}. 2022-08-23 11:22:44,055 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Try to register at job manager akka.tcp://flink@hadoop02:46317/user/rpc/jobmanager_2 with leader id e53cafdd-5ce2-4edc-b03e-3787a9c2d105. 2022-08-23 11:22:44,066 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Resolved JobManager address, beginning registration 2022-08-23 11:22:44,080 INFO org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Successful registration at job manager akka.tcp://flink@hadoop02:46317/user/rpc/jobmanager_2 for job 62179b92f6a8bf69378e6a7e872335d2. 2022-08-23 11:22:44,081 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Establish JobManager connection for job 62179b92f6a8bf69378e6a7e872335d2. 2022-08-23 11:22:44,083 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Offer reserved slots to the leader of job 62179b92f6a8bf69378e6a7e872335d2. 2022-08-23 11:22:44,099 INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot b20ccfa7fc8e8a291ea144cd5ac4e841. 2022-08-23 11:22:44,111 INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot b20ccfa7fc8e8a291ea144cd5ac4e841. 2022-08-23 11:22:44,141 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Received task Sink: mysqlsinkfactory (1/1)#0 (66b10534e7de1e665f457ae26de845bf), deploy into slot with allocation id b20ccfa7fc8e8a291ea144cd5ac4e841. 2022-08-23 11:22:44,142 INFO org.apache.flink.runtime.taskmanager.Task [] - Sink: mysqlsinkfactory (1/1)#0 (66b10534e7de1e665f457ae26de845bf) switched from CREATED to DEPLOYING. 2022-08-23 11:22:44,145 INFO org.apache.flink.runtime.taskmanager.Task [] - Loading JAR files for task Sink: mysqlsinkfactory (1/1)#0 (66b10534e7de1e665f457ae26de845bf) [DEPLOYING]. 2022-08-23 11:22:44,147 INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot b20ccfa7fc8e8a291ea144cd5ac4e841. 
2022-08-23 11:22:44,156 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Received task Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188), deploy into slot with allocation id b20ccfa7fc8e8a291ea144cd5ac4e841. 2022-08-23 11:22:44,156 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188) switched from CREATED to DEPLOYING. 2022-08-23 11:22:44,156 INFO org.apache.flink.runtime.taskmanager.Task [] - Loading JAR files for task Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188) [DEPLOYING]. 2022-08-23 11:22:44,300 INFO org.apache.flink.runtime.taskmanager.Task [] - Registering task at network: Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188) [DEPLOYING]. 2022-08-23 11:22:44,300 INFO org.apache.flink.runtime.taskmanager.Task [] - Registering task at network: Sink: mysqlsinkfactory (1/1)#0 (66b10534e7de1e665f457ae26de845bf) [DEPLOYING]. 2022-08-23 11:22:44,304 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_3'. 2022-08-23 11:22:44,304 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_3'. 2022-08-23 11:22:44,305 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_2'. 2022-08-23 11:22:44,305 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_2'. 2022-08-23 11:22:44,305 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_1'. 2022-08-23 11:22:44,305 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_1'. 2022-08-23 11:22:44,306 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_0'. 2022-08-23 11:22:44,306 INFO org.apache.flink.runtime.taskmanager.Task [] - Obtaining local cache file for 'class_path_0'. 2022-08-23 11:22:44,330 INFO org.apache.flink.streaming.runtime.tasks.StreamTask [] - Using job/cluster config to configure application-defined state backend: File State Backend (checkpoints: 'hdfs://hadoop02:8020/flink-checkpoints', savepoints: 'null', asynchronous: TRUE, fileStateThreshold: 20480) 2022-08-23 11:22:44,330 INFO org.apache.flink.streaming.runtime.tasks.StreamTask [] - Using job/cluster config to configure application-defined state backend: File State Backend (checkpoints: 'hdfs://hadoop02:8020/flink-checkpoints', savepoints: 'null', asynchronous: TRUE, fileStateThreshold: 20480) 2022-08-23 11:22:44,330 INFO org.apache.flink.streaming.runtime.tasks.StreamTask [] - Using application-defined state backend: File State Backend (checkpoints: 'hdfs://hadoop02:8020/flink-checkpoints', savepoints: 'null', asynchronous: TRUE, fileStateThreshold: 20480) 2022-08-23 11:22:44,330 INFO org.apache.flink.streaming.runtime.tasks.StreamTask [] - Using application-defined state backend: File State Backend (checkpoints: 'hdfs://hadoop02:8020/flink-checkpoints', savepoints: 'null', asynchronous: TRUE, fileStateThreshold: 20480) 2022-08-23 11:22:44,340 INFO org.apache.flink.runtime.taskmanager.Task [] - Sink: mysqlsinkfactory (1/1)#0 (66b10534e7de1e665f457ae26de845bf) switched from DEPLOYING to RUNNING. 2022-08-23 11:22:44,340 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188) switched from DEPLOYING to RUNNING. 
2022-08-23 11:22:44,439 INFO com.dtstack.chunjun.sink.DtOutputFormatSinkFunction [] - Start initialize output format state 2022-08-23 11:22:44,439 INFO com.dtstack.chunjun.source.DtInputFormatSourceFunction [] - Start initialize input format state, is restored:false 2022-08-23 11:22:44,472 INFO com.dtstack.chunjun.source.DtInputFormatSourceFunction [] - End initialize input format state 2022-08-23 11:22:44,472 INFO com.dtstack.chunjun.sink.DtOutputFormatSinkFunction [] - Is restored:false 2022-08-23 11:22:44,472 INFO com.dtstack.chunjun.sink.DtOutputFormatSinkFunction [] - End initialize output format state 2022-08-23 11:22:44,597 INFO com.dtstack.chunjun.metrics.prometheus.PrometheusReport [] - Configured PrometheusPushGatewayReporter with {host:172.18.8.211, port:9091, jobName: flink-metrics5f1c40630d6cc0ec7b3b23eb13c4d7ff, randomJobNameSuffix:true, deleteOnShutdown:false} 2022-08-23 11:22:44,789 INFO com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - initTimingSubmitTask() ,initialDelay:10000, delay:10000, MILLISECONDS 2022-08-23 11:22:45,247 INFO com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat [] - write sql:INSERT INTO one_copy1(id, name, age, phone) VALUES (:id, :name, :age, :phone) 2022-08-23 11:22:45,273 INFO com.dtstack.chunjun.connector.mysql.source.MysqlInputFormat [] - Executing sql is: 'SELECT id, name, age, phone FROM one WHERE 1=1 ORDER BY id ASC' 2022-08-23 11:22:45,309 INFO com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat [] - subTask[0}] wait finished 2022-08-23 11:22:45,367 INFO com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - [MysqlOutputFormat] open successfully, checkpointMode = AT_LEAST_ONCE, checkpointEnabled = false, flushIntervalMills = 10000, batchSize = 1024, [JdbcConf]: { "semantic" : "at-least-once", "errorRecord" : 0, "checkFormat" : true, "parallelism" : 1, "executeDdlAble" : false, "pollingInterval" : 5000, "increment" : false, "flushIntervalMills" : 10000, "polling" : false, "mode" : "insert", "password" : "**", "metricPluginRoot" : "/opt/flinkx/chunjun/metrics", "restoreColumnIndex" : -1, "connection" : [ { "table" : [ "one_copy1" ], "jdbcUrl" : "jdbc:mysql://172.18.8.77:3306/zk_test", "allReplace" : false } ], "table" : "one_copy1", "queryTimeOut" : 0, "fetchSize" : 0, "useMaxFunc" : false, "column" : [ { "name" : "id", "type" : "INT", "index" : 0, "notNull" : false, "part" : false }, { "name" : "name", "type" : "VARCHAR", "index" : 1, "notNull" : false, "part" : false }, { "name" : "age", "type" : "TINYINT", "index" : 2, "notNull" : false, "part" : false }, { "name" : "phone", "type" : "VARCHAR", "index" : 3, "notNull" : false, "part" : false } ], "errorPercentage" : -1, "fieldNameList" : [ ], "withNoLock" : false, "increColumnIndex" : -1, "allReplace" : false, "initReporter" : true, "jdbcUrl" : "jdbc:mysql://172.18.8.77:3306/zk_test", "connectTimeOut" : 0, "batchSize" : 1024, "speedBytes" : 0, "rowSizeCalculatorType" : "objectSizeCalculator", "metricPluginName" : "prometheus", "properties" : { "user" : "root", "password" : "**", "useCursorFetch" : "true", "rewriteBatchedStatements" : "true" }, "username" : "root" } 2022-08-23 11:22:45,367 INFO com.dtstack.chunjun.connector.mysql.source.MysqlInputFormat [] - [MysqlInputFormat] open successfully, inputSplit = JdbcInputSplit{mod=0, endLocation='null', startLocation='null', startLocationOfSplit='null', endLocationOfSplit='null', isPolling=false, splitStrategy='mod', rangeEndLocationOperator=' < '}GenericSplit (0/1), [JdbcConf]: { "semantic" : "at-least-once", 
"errorRecord" : 0, "checkFormat" : true, "parallelism" : 1, "executeDdlAble" : false, "pollingInterval" : 5000, "increment" : true, "flushIntervalMills" : 10000, "polling" : false, "querySql" : "SELECT id, name, age, phone FROM one WHERE 1=1 ORDER BY id ASC", "mode" : "INSERT", "password" : "**", "increColumn" : "id", "metricPluginRoot" : "/opt/flinkx/chunjun/metrics", "restoreColumn" : "id", "restoreColumnIndex" : 0, "connection" : [ { "table" : [ "one" ], "jdbcUrl" : [ "jdbc:mysql://172.18.8.77:3306/zk_test" ] } ], "splitPk" : "id", "table" : "one", "queryTimeOut" : 300, "restoreColumnType" : "INT", "fetchSize" : -2147483648, "useMaxFunc" : false, "column" : [ { "name" : "id", "type" : "INT", "index" : 0, "notNull" : false, "part" : false }, { "name" : "name", "type" : "VARCHAR", "index" : 1, "notNull" : false, "part" : false }, { "name" : "age", "type" : "TINYINT", "index" : 2, "notNull" : false, "part" : false }, { "name" : "phone", "type" : "VARCHAR", "index" : 3, "notNull" : false, "part" : false } ], "errorPercentage" : -1, "fieldNameList" : [ ], "withNoLock" : false, "increColumnIndex" : 0, "allReplace" : false, "splitStrategy" : "range", "initReporter" : true, "jdbcUrl" : "jdbc:mysql://172.18.8.77:3306/zk_test", "connectTimeOut" : 600, "batchSize" : 1, "speedBytes" : 0, "rowSizeCalculatorType" : "objectSizeCalculator", "metricPluginName" : "prometheus", "properties" : { "user" : "root", "password" : "**", "useCursorFetch" : "true", "rewriteBatchedStatements" : "true" }, "increColumnType" : "INT", "username" : "root" } 2022-08-23 11:22:45,375 INFO org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate [] - Converting recovered input channels (1 channels) 2022-08-23 11:22:45,383 INFO com.dtstack.chunjun.dirty.log.LogDirtyDataCollector [] - Print consumer closed. 2022-08-23 11:22:45,395 INFO com.dtstack.chunjun.metrics.prometheus.PrometheusReport [] - push metrics to PushGateway with jobName flink-metrics5f1c40630d6cc0ec7b3b23eb13c4d7ff. 2022-08-23 11:22:54,813 WARN com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat [] - write Multiple Records error, start to rollback connection, row size = 7, first row = { "columnList": [ { "data": 1, "byteSize": 0 }, { "format": "yyyy-MM-dd HH:mm:ss", "isCustomFormat": false, "data": "cccc", "byteSize": 0 }, { "data": 18, "byteSize": 0 }, { "format": "yyyy-MM-dd HH:mm:ss", "isCustomFormat": false, "data": "18888888", "byteSize": 0 } ], "extHeader": [], "byteSize": 175, "kind": "INSERT" } java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY' at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_261] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_261] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_261] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_261] at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[chunjun-connector-mysql.jar:?] at com.mysql.jdbc.Util.getInstance(Util.java:408) ~[chunjun-connector-mysql.jar:?] at com.mysql.jdbc.SQLError.createBatchUpdateException(SQLError.java:1163) ~[chunjun-connector-mysql.jar:?] at com.mysql.jdbc.PreparedStatement.executeBatchedInserts(PreparedStatement.java:1587) ~[chunjun-connector-mysql.jar:?] at com.mysql.jdbc.PreparedStatement.executeBatchInternal(PreparedStatement.java:1253) ~[chunjun-connector-mysql.jar:?] 
at com.mysql.jdbc.StatementImpl.executeBatch(StatementImpl.java:970) ~[chunjun-connector-mysql.jar:?]
    at com.dtstack.chunjun.connector.jdbc.statement.FieldNamedPreparedStatementImpl.executeBatch(FieldNamedPreparedStatementImpl.java:131) ~[chunjun-connector-mysql.jar:?]
    at com.dtstack.chunjun.connector.jdbc.sink.PreparedStmtProxy.executeBatch(PreparedStmtProxy.java:251) ~[chunjun-connector-mysql.jar:?]
    at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.writeMultipleRecordsInternal(JdbcOutputFormat.java:222) ~[chunjun-connector-mysql.jar:?]
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeRecordInternal(BaseRichOutputFormat.java:483) ~[blob_p-1597a926ed8f6241f6b0ddcf32e546702099a14e-58531459bb6093b5258726dc49a3237d:?]
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$initTimingSubmitTask$0(BaseRichOutputFormat.java:443) ~[blob_p-1597a926ed8f6241f6b0ddcf32e546702099a14e-58531459bb6093b5258726dc49a3237d:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_261]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_261]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_261]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_261]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_261]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_261]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_261]
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_261]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_261]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_261]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_261]
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.Util.getInstance(Util.java:408) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3976) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3912) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1283) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:783) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104) ~[chunjun-connector-mysql.jar:?]
    at com.mysql.jdbc.PreparedStatement.executeBatchedInserts(PreparedStatement.java:1548) ~[chunjun-connector-mysql.jar:?]
    ... 14 more
2022-08-23 11:22:54,851 ERROR com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - Writing records failed.
com.dtstack.chunjun.throwable.NoRestartException: The dirty consumer shutdown, due to the consumed count exceed the max-consumed [0]
    at com.dtstack.chunjun.dirty.consumer.DirtyDataCollector.addConsumed(DirtyDataCollector.java:105)
    at com.dtstack.chunjun.dirty.consumer.DirtyDataCollector.offer(DirtyDataCollector.java:79)
    at com.dtstack.chunjun.dirty.manager.DirtyManager.collect(DirtyManager.java:140)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeSingleRecord(BaseRichOutputFormat.java:469)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$writeRecordInternal$1(BaseRichOutputFormat.java:487)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeRecordInternal(BaseRichOutputFormat.java:487)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$initTimingSubmitTask$0(BaseRichOutputFormat.java:443)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

2022-08-23 11:22:54,852 WARN com.dtstack.chunjun.dirty.log.LogDirtyDataCollector [] - ====================Dirty Data=====================
DirtyDataEntry[jobId='62179b92f6a8bf69378e6a7e872335d2', jobName='666', operatorName='Sink: mysqlsinkfactory', dirtyContent='{"extHeader":[],"byteSize":175,"string":"(1,cccc,18,18888888)","headers":null,"rowKind":"INSERT","headerInfo":null,"arity":4}', errorMessage='com.dtstack.chunjun.throwable.WriteRecordException: JdbcOutputFormat [666] writeRecord error: when converting field[0] in Row(+I(1,cccc,18,18888888))
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
    at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.processWriteException(JdbcOutputFormat.java:384)
    at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.writeSingleRecordInternal(JdbcOutputFormat.java:199)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeSingleRecord(BaseRichOutputFormat.java:466)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$writeRecordInternal$1(BaseRichOutputFormat.java:487)
    at java.util.ArrayList.forEach(ArrayList.java:1259)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeRecordInternal(BaseRichOutputFormat.java:487)
    at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$initTimingSubmitTask$0(BaseRichOutputFormat.java:443)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
    at com.mysql.jdbc.Util.getInstance(Util.java:408)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3976)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3912)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
    at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1283)
    at com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:783)
    at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1197)
    at com.dtstack.chunjun.connector.jdbc.statement.FieldNamedPreparedStatementImpl.execute(FieldNamedPreparedStatementImpl.java:141)
    at com.dtstack.chunjun.connector.jdbc.sink.PreparedStmtProxy.writeSingleRecordInternal(PreparedStmtProxy.java:205)
    at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.writeSingleRecordInternal(JdbcOutputFormat.java:196)
    ... 12 more
', fieldName='null', createTime=2022-08-23 11:22:54.819]

===================================================
2022-08-23 11:23:05,395 INFO com.dtstack.chunjun.connector.mysql.source.MysqlInputFormat [] - subtask input close finished
2022-08-23 11:23:05,403 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188) switched from RUNNING to FINISHED.
2022-08-23 11:23:05,403 INFO org.apache.flink.runtime.taskmanager.Task [] - Freeing task resources for Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188).
2022-08-23 11:23:05,404 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Un-registering task and sending final execution state FINISHED to JobManager for task Source: mysqlsourcefactory (1/1)#0 dbb488235bfc9269c1948b00ee8e7188.
2022-08-23 11:23:05,404 INFO com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - taskNumber[0] close()
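For context on why the task terminates outright: the NoRestartException above is thrown from DirtyDataCollector.addConsumed once the number of dirty rows exceeds the configured maximum, and the log reports that maximum as 0, which suggests the dirty-data options never took effect for this job. Below is a minimal, self-contained sketch of that kind of limit check. It is not the actual ChunJun source; the class name, fields, and the assumption that the limit effectively defaults to 0 when no dirty-data settings reach the job are mine, and only the exception message itself comes from the log.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical illustration of the "max-consumed" check, NOT ChunJun code.
public class DirtyLimitSketch {

    static class NoRestartException extends RuntimeException {
        NoRestartException(String message) { super(message); }
    }

    private final AtomicLong consumed = new AtomicLong(0);
    // Assumption: 0 here because the dirty-data settings were not applied to the job.
    private final long maxConsumed;

    DirtyLimitSketch(long maxConsumed) {
        this.maxConsumed = maxConsumed;
    }

    /** Called once per dirty row; throws when the configured limit is exceeded. */
    void addConsumed(long count) {
        long total = consumed.addAndGet(count);
        if (total > maxConsumed) {
            throw new NoRestartException(
                    "The dirty consumer shutdown, due to the consumed count exceed the max-consumed ["
                            + maxConsumed + "]");
        }
    }

    public static void main(String[] args) {
        DirtyLimitSketch collector = new DirtyLimitSketch(0);
        try {
            collector.addConsumed(1); // the very first duplicate-key row already exceeds a limit of 0
        } catch (NoRestartException e) {
            System.out.println("Task would terminate here: " + e.getMessage());
        }
    }
}
```

With a limit of 0, the first dirty row shuts the sink down before anything can be written to the configured dirty-data table, which matches both symptoms reported in this issue.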

Paddy0523 commented 2 years ago

The error is caused by duplicate primary keys. The target table one_copy1 already contains a row with id = 1, so the plain INSERT fails with Duplicate entry '1' for key 'PRIMARY'; and because the dirty consumer's limit in this run is effectively 0 (max-consumed [0] in the log above), the failing row shuts the task down instead of being collected as dirty data. See the JDBC sketch below for the SQL-level difference.

[screenshot attached]
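At the SQL level, the difference between failing and tolerating an existing key looks like the sketch below. This is plain JDBC for illustration, not ChunJun code: the URL, credentials, table, and columns are copied from the JdbcConf dump in the log, and the statement shapes are mine. My understanding is that ChunJun's jdbc writer can also generate upsert-style statements when the write mode is changed away from insert (update/replace), but please confirm against the connector documentation for your version.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

// Standalone reproduction of the failure and of an upsert that avoids it.
public class DuplicateKeySketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://172.18.8.77:3306/zk_test";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456")) {

            // Plain INSERT: fails with "Duplicate entry '1' for key 'PRIMARY'"
            // when id=1 already exists in one_copy1 -- the same statement shape the log shows.
            String insert = "INSERT INTO one_copy1(id, name, age, phone) VALUES (?, ?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(insert)) {
                ps.setInt(1, 1);
                ps.setString(2, "cccc");
                ps.setInt(3, 18);
                ps.setString(4, "18888888");
                ps.executeUpdate();
            } catch (SQLIntegrityConstraintViolationException e) {
                System.out.println("duplicate key, falling back to upsert: " + e.getMessage());
            }

            // Upsert: updates the existing row instead of failing on the primary key.
            String upsert = "INSERT INTO one_copy1(id, name, age, phone) VALUES (?, ?, ?, ?) "
                    + "ON DUPLICATE KEY UPDATE name = VALUES(name), age = VALUES(age), phone = VALUES(phone)";
            try (PreparedStatement ps = conn.prepareStatement(upsert)) {
                ps.setInt(1, 1);
                ps.setString(2, "cccc");
                ps.setInt(3, 18);
                ps.setString(4, "18888888");
                ps.executeUpdate();
            }
        }
    }
}
```

If the requirement is instead to keep insert semantics and only skip the duplicates, the dirty-data limit needs to be greater than 0 when the job actually runs; the max-consumed [0] in the log indicates that the submitted dirty-data options were not picked up.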