biandou1313 opened this issue 2 years ago
2022-08-23 11:22:17,512 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - --------------------------------------------------------------------------------
2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Preconfiguration:
2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] -
TM_RESOURCE_PARAMS extraction logs:
jvm_params: -Xmx8294655496 -Xms8294655496 -XX:MaxDirectMemorySize=1207959552 -XX:MaxMetaspaceSize=268435456
dynamic_configs: -D taskmanager.memory.framework.off-heap.size=134217728b -D taskmanager.memory.network.max=1073741824b -D taskmanager.memory.network.min=1073741824b -D taskmanager.memory.framework.heap.size=134217728b -D taskmanager.memory.managed.size=6335076856b -D taskmanager.cpu.cores=50.0 -D taskmanager.memory.task.heap.size=8160437768b -D taskmanager.memory.task.off-heap.size=0b -D taskmanager.memory.jvm-metaspace.size=268435456b -D taskmanager.memory.jvm-overhead.max=1073741824b -D taskmanager.memory.jvm-overhead.min=1073741824b
logs:
INFO [] - Loading configuration property: jobmanager.rpc.address, hadoop02
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 50
INFO [] - Loading configuration property: web.submit.enable, true
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123
INFO [] - Loading configuration property: jobmanager.memory.process.size, 2g
INFO [] - Loading configuration property: taskmanager.memory.process.size, 16g
INFO [] - Loading configuration property: parallelism.default, 5
INFO [] - Loading configuration property: jobmanager.execution.failover-strategy, region
WARN [] - Error while trying to split key and value in configuration file /opt/flink/flink-1.12.2/conf/flink-conf.yaml:13: "env.java.home:/usr/java/jdk1.8.0_261"
INFO [] - Loading configuration property: jobmanager.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/
INFO [] - Loading configuration property: historyserver.web.address, 0.0.0.0
INFO [] - Loading configuration property: historyserver.archive.fs.refresh-interval, 10000
INFO [] - Loading configuration property: historyserver.web.port, 8082
INFO [] - Loading configuration property: historyserver.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/
INFO [] - Loading configuration property: state.backend, filesystem
INFO [] - Loading configuration property: state.backend.fs.checkpointdir, hdfs://hadoop02:8020/flink-checkpoints
INFO [] - Loading configuration property: high-availability, zookeeper
INFO [] - Loading configuration property: high-availability.storageDir, hdfs://hadoop02:8020/flink/ha/
INFO [] - Loading configuration property: high-availability.zookeeper.quorum, hadoop01:2181,hadoop02:2181,hadoop03:2181
INFO [] - Loading configuration property: metrics.reporter.promgateway.class, org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
INFO [] - Loading configuration property: metrics.reporter.promgateway.host, 172.18.8.211
INFO [] - Loading configuration property: metrics.reporter.promgateway.port, 9091
INFO [] - Loading configuration property: metrics.reporter.promgateway.jobName, flink-metrics
INFO [] - Loading configuration property: metrics.reporter.promgateway.randomJobNameSuffix, true
INFO [] - Loading configuration property: metrics.reporter.promgateway.deleteOnShutdown, false
INFO [] - Loading configuration property: metrics.reporter.promgateway.interval, 30 SECONDS
INFO [] - The derived from fraction jvm overhead memory (1.600gb (1717986944 bytes)) is greater than its max value 1024.000mb (1073741824 bytes), max value will be used instead
INFO [] - The derived from fraction network memory (1.475gb (1583769214 bytes)) is greater than its max value 1024.000mb (1073741824 bytes), max value will be used instead
INFO [] - Final TaskExecutor Memory configuration:
INFO [] -   Total Process Memory:          16.000gb (17179869184 bytes)
INFO [] -     Total Flink Memory:          14.750gb (15837691904 bytes)
INFO [] -       Total JVM Heap Memory:     7.725gb (8294655496 bytes)
INFO [] -         Framework:               128.000mb (134217728 bytes)
INFO [] -         Task:                    7.600gb (8160437768 bytes)
INFO [] -       Total Off-heap Memory:     7.025gb (7543036408 bytes)
INFO [] -         Managed:                 5.900gb (6335076856 bytes)
INFO [] -         Total JVM Direct Memory: 1.125gb (1207959552 bytes)
INFO [] -           Framework:             128.000mb (134217728 bytes)
INFO [] -           Task:                  0 bytes
INFO [] -           Network:               1024.000mb (1073741824 bytes)
INFO [] -     JVM Metaspace:               256.000mb (268435456 bytes)
INFO [] -     JVM Overhead:                1024.000mb (1073741824 bytes)
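The figures in the "Final TaskExecutor Memory configuration" block above are internally consistent with Flink's TaskManager memory model. A minimal sketch of the accounting (just arithmetic over the byte values printed in the log; not Flink's actual derivation code, which applies fractions and min/max caps first):

```python
# Byte values copied from the log above.
total_process    = 17179869184   # taskmanager.memory.process.size = 16g
jvm_metaspace    = 268435456     # 256mb
jvm_overhead     = 1073741824    # fraction-derived 1.600gb, capped at max 1gb
network          = 1073741824    # fraction-derived 1.475gb, capped at max 1gb
framework_heap   = 134217728     # 128mb
framework_offheap = 134217728    # 128mb
task_offheap     = 0
managed          = 6335076856    # managed memory fraction of Total Flink Memory

# Total Flink Memory = process memory minus metaspace and JVM overhead.
total_flink = total_process - jvm_metaspace - jvm_overhead
assert total_flink == 15837691904            # 14.750gb in the log

# Task heap is what remains after network, framework, and managed memory.
task_heap = total_flink - network - framework_heap - framework_offheap - managed
assert task_heap == 8160437768               # 7.600gb in the log

# JVM heap (framework + task heap) matches the -Xmx/-Xms flags.
assert framework_heap + task_heap == 8294655496

# Direct memory (framework off-heap + task off-heap + network)
# matches -XX:MaxDirectMemorySize.
assert framework_offheap + task_offheap + network == 1207959552
```

So the `-Xmx8294655496` and `-XX:MaxDirectMemorySize=1207959552` JVM flags further down in the log follow directly from these configured sizes.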
2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - --------------------------------------------------------------------------------
2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Starting TaskManager (Version: 1.12.7, Scala: 2.12, Rev:88d9950, Date:2021-12-14T23:39:33+01:00)
2022-08-23 11:22:17,516 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - OS current user: hadoop
2022-08-23 11:22:17,715 WARN org.apache.hadoop.util.NativeCodeLoader [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2022-08-23 11:22:17,764 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Current Hadoop/Kerberos user: hadoop
2022-08-23 11:22:17,765 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.261-b12
2022-08-23 11:22:17,765 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Maximum heap size: 7912 MiBytes
2022-08-23 11:22:17,765 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - JAVA_HOME: /usr/java/jdk1.8.0_261
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Hadoop version: 2.8.3
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - JVM Options:
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -XX:+UseG1GC
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Xmx8294655496
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Xms8294655496
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -XX:MaxDirectMemorySize=1207959552
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -XX:MaxMetaspaceSize=268435456
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlog.file=/opt/flink/flink-1.12.2/log/flink-hadoop-taskexecutor-1-hadoop03.log
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlog4j.configuration=file:/opt/flink/flink-1.12.2/conf/log4j.properties
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlog4j.configurationFile=file:/opt/flink/flink-1.12.2/conf/log4j.properties
2022-08-23 11:22:17,766 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -Dlogback.configurationFile=file:/opt/flink/flink-1.12.2/conf/logback.xml
2022-08-23 11:22:17,767 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Program Arguments:
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - --configDir
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - /opt/flink/flink-1.12.2/conf
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.framework.off-heap.size=134217728b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.network.max=1073741824b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.network.min=1073741824b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.framework.heap.size=134217728b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.managed.size=6335076856b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.cpu.cores=50.0
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.task.heap.size=8160437768b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.task.off-heap.size=0b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.jvm-metaspace.size=268435456b
2022-08-23 11:22:17,768 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.jvm-overhead.max=1073741824b
2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - -D
2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - taskmanager.memory.jvm-overhead.min=1073741824b
2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Classpath: /opt/flink/flink-1.12.2/lib/flink-csv-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-json-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.2.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/flink-1.12.2/lib/flink-table_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-table-blink_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/log4j-1.2-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-core-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-slf4j-impl-2.16.0.jar:/opt/flink/flink-1.12.2/lib/flink-dist_2.12-1.12.7.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/li
b/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-ap
i-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/shar
e/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/je
rsey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/ha
doop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1
.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/contrib/capacity-scheduler/*.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop:
2022-08-23 11:22:17,769 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - --------------------------------------------------------------------------------
2022-08-23 11:22:17,770 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Registered UNIX signal handlers for [TERM, HUP, INT]
2022-08-23 11:22:17,772 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Maximum number of open file descriptors is 1000000.
2022-08-23 11:22:17,778 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.address, hadoop02
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.numberOfTaskSlots, 50
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: web.submit.enable, true
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.rpc.port, 6123
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.memory.process.size, 2g
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: taskmanager.memory.process.size, 16g
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: parallelism.default, 5
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.execution.failover-strategy, region
2022-08-23 11:22:17,779 WARN org.apache.flink.configuration.GlobalConfiguration [] - Error while trying to split key and value in configuration file /opt/flink/flink-1.12.2/conf/flink-conf.yaml:13: "env.java.home:/usr/java/jdk1.8.0_261"
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: jobmanager.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.web.address, 0.0.0.0
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.archive.fs.refresh-interval, 10000
2022-08-23 11:22:17,779 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.web.port, 8082
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: historyserver.archive.fs.dir, hdfs://hadoop02:8020/flink/completed-jobs/
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.backend, filesystem
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: state.backend.fs.checkpointdir, hdfs://hadoop02:8020/flink-checkpoints
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: high-availability, zookeeper
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: high-availability.storageDir, hdfs://hadoop02:8020/flink/ha/
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: high-availability.zookeeper.quorum, hadoop01:2181,hadoop02:2181,hadoop03:2181
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.class, org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.host, 172.18.8.211
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.port, 9091
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.jobName, flink-metrics
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.randomJobNameSuffix, true
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.deleteOnShutdown, false
2022-08-23 11:22:17,780 INFO org.apache.flink.configuration.GlobalConfiguration [] - Loading configuration property: metrics.reporter.promgateway.interval, 30 SECONDS
2022-08-23 11:22:17,884 INFO org.apache.flink.runtime.security.modules.HadoopModule [] - Hadoop user set to hadoop (auth:SIMPLE)
2022-08-23 11:22:17,894 INFO org.apache.flink.runtime.security.modules.JaasModule [] - Jaas file will be created as /tmp/jaas-7284308405517075844.conf.
2022-08-23 11:22:18,301 INFO org.apache.flink.runtime.blob.FileSystemBlobStore [] - Creating highly available BLOB storage directory at hdfs://hadoop02:8020/flink/ha/default/blob
2022-08-23 11:22:18,345 INFO org.apache.flink.runtime.util.ZooKeeperUtils [] - Enforcing default ACL for ZK connections
2022-08-23 11:22:18,345 INFO org.apache.flink.runtime.util.ZooKeeperUtils [] - Using '/flink/default' as Zookeeper namespace.
2022-08-23 11:22:18,372 INFO org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - Running in ZooKeeper 3.4.x compatibility mode
2022-08-23 11:22:18,372 INFO org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility [] - Using emulated InjectSessionExpiration
2022-08-23 11:22:18,389 INFO org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl [] - Starting
2022-08-23 11:22:18,394 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:host.name=hadoop03
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.version=1.8.0_261
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.vendor=Oracle Corporation
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.home=/usr/java/jdk1.8.0_261/jre
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.class.path=/opt/flink/flink-1.12.2/lib/flink-csv-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-json-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.2.jar:/opt/flink/flink-1.12.2/lib/flink-metrics-prometheus-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/flink-1.12.2/lib/flink-shaded-zookeeper-3.4.14.jar:/opt/flink/flink-1.12.2/lib/flink-table_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/flink-table-blink_2.12-1.12.7.jar:/opt/flink/flink-1.12.2/lib/log4j-1.2-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-api-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-core-2.16.0.jar:/opt/flink/flink-1.12.2/lib/log4j-slf4j-impl-2.16.0.jar:/opt/flink/flink-1.12.2/lib/flink-dist_2.12-1.12.7.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/h
adoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7
.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0
.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/had
oop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/ya
rn/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/hadoop-2.7.4/
share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/opt/hadoop/hadoop-2.7.4/contrib/capacity-scheduler/*.jar:/opt/hadoop/hadoop-2.7.4/etc/hadoop:
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.io.tmpdir=/tmp
2022-08-23 11:22:18,395 INFO org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Client environment:java.compiler=<NA>
...one_copy1 (id, name, age, phone) VALUES (:id, :name, :age, :phone)
2022-08-23 11:22:45,273 INFO com.dtstack.chunjun.connector.mysql.source.MysqlInputFormat [] - Executing sql is: 'SELECT id, name, age, phone FROM one WHERE 1=1 ORDER BY id ASC'
2022-08-23 11:22:45,309 INFO com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat [] - subTask[0] wait finished
2022-08-23 11:22:45,367 INFO com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - [MysqlOutputFormat] open successfully,
checkpointMode = AT_LEAST_ONCE,
checkpointEnabled = false,
flushIntervalMills = 10000,
batchSize = 1024,
[JdbcConf]:
{
"semantic" : "at-least-once",
"errorRecord" : 0,
"checkFormat" : true,
"parallelism" : 1,
"executeDdlAble" : false,
"pollingInterval" : 5000,
"increment" : false,
"flushIntervalMills" : 10000,
"polling" : false,
"mode" : "insert",
"password" : "**",
"metricPluginRoot" : "/opt/flinkx/chunjun/metrics",
"restoreColumnIndex" : -1,
"connection" : [ {
"table" : [ "one_copy1" ],
"jdbcUrl" : "jdbc:mysql://172.18.8.77:3306/zk_test",
"allReplace" : false
} ],
"table" : "one_copy1",
"queryTimeOut" : 0,
"fetchSize" : 0,
"useMaxFunc" : false,
"column" : [ {
"name" : "id",
"type" : "INT",
"index" : 0,
"notNull" : false,
"part" : false
}, {
"name" : "name",
"type" : "VARCHAR",
"index" : 1,
"notNull" : false,
"part" : false
}, {
"name" : "age",
"type" : "TINYINT",
"index" : 2,
"notNull" : false,
"part" : false
}, {
"name" : "phone",
"type" : "VARCHAR",
"index" : 3,
"notNull" : false,
"part" : false
} ],
"errorPercentage" : -1,
"fieldNameList" : [ ],
"withNoLock" : false,
"increColumnIndex" : -1,
"allReplace" : false,
"initReporter" : true,
"jdbcUrl" : "jdbc:mysql://172.18.8.77:3306/zk_test",
"connectTimeOut" : 0,
"batchSize" : 1024,
"speedBytes" : 0,
"rowSizeCalculatorType" : "objectSizeCalculator",
"metricPluginName" : "prometheus",
"properties" : {
"user" : "root",
"password" : "**",
"useCursorFetch" : "true",
"rewriteBatchedStatements" : "true"
},
"username" : "root"
}
2022-08-23 11:22:45,367 INFO com.dtstack.chunjun.connector.mysql.source.MysqlInputFormat [] - [MysqlInputFormat] open successfully,
inputSplit = JdbcInputSplit{mod=0, endLocation='null', startLocation='null', startLocationOfSplit='null', endLocationOfSplit='null', isPolling=false, splitStrategy='mod', rangeEndLocationOperator=' < '}GenericSplit (0/1),
[JdbcConf]:
{
"semantic" : "at-least-once",
"errorRecord" : 0,
"checkFormat" : true,
"parallelism" : 1,
"executeDdlAble" : false,
"pollingInterval" : 5000,
"increment" : true,
"flushIntervalMills" : 10000,
"polling" : false,
"querySql" : "SELECT id
, name
, age
, phone
FROM one
WHERE 1=1 ORDER BY id
ASC",
"mode" : "INSERT",
"password" : "**",
"increColumn" : "id",
"metricPluginRoot" : "/opt/flinkx/chunjun/metrics",
"restoreColumn" : "id",
"restoreColumnIndex" : 0,
"connection" : [ {
"table" : [ "one" ],
"jdbcUrl" : [ "jdbc:mysql://172.18.8.77:3306/zk_test" ]
} ],
"splitPk" : "id",
"table" : "one",
"queryTimeOut" : 300,
"restoreColumnType" : "INT",
"fetchSize" : -2147483648,
"useMaxFunc" : false,
"column" : [ {
"name" : "id",
"type" : "INT",
"index" : 0,
"notNull" : false,
"part" : false
}, {
"name" : "name",
"type" : "VARCHAR",
"index" : 1,
"notNull" : false,
"part" : false
}, {
"name" : "age",
"type" : "TINYINT",
"index" : 2,
"notNull" : false,
"part" : false
}, {
"name" : "phone",
"type" : "VARCHAR",
"index" : 3,
"notNull" : false,
"part" : false
} ],
"errorPercentage" : -1,
"fieldNameList" : [ ],
"withNoLock" : false,
"increColumnIndex" : 0,
"allReplace" : false,
"splitStrategy" : "range",
"initReporter" : true,
"jdbcUrl" : "jdbc:mysql://172.18.8.77:3306/zk_test",
"connectTimeOut" : 600,
"batchSize" : 1,
"speedBytes" : 0,
"rowSizeCalculatorType" : "objectSizeCalculator",
"metricPluginName" : "prometheus",
"properties" : {
"user" : "root",
"password" : "**",
"useCursorFetch" : "true",
"rewriteBatchedStatements" : "true"
},
"increColumnType" : "INT",
"username" : "root"
}
2022-08-23 11:22:45,375 INFO org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate [] - Converting recovered input channels (1 channels)
2022-08-23 11:22:45,383 INFO com.dtstack.chunjun.dirty.log.LogDirtyDataCollector [] - Print consumer closed.
2022-08-23 11:22:45,395 INFO com.dtstack.chunjun.metrics.prometheus.PrometheusReport [] - push metrics to PushGateway with jobName flink-metrics5f1c40630d6cc0ec7b3b23eb13c4d7ff.
2022-08-23 11:22:54,813 WARN com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat [] - write Multiple Records error, start to rollback connection, row size = 7, first row = {
"columnList": [
{
"data": 1,
"byteSize": 0
},
{
"format": "yyyy-MM-dd HH:mm:ss",
"isCustomFormat": false,
"data": "cccc",
"byteSize": 0
},
{
"data": 18,
"byteSize": 0
},
{
"format": "yyyy-MM-dd HH:mm:ss",
"isCustomFormat": false,
"data": "18888888",
"byteSize": 0
}
],
"extHeader": [],
"byteSize": 175,
"kind": "INSERT"
}
java.sql.BatchUpdateException: Duplicate entry '1' for key 'PRIMARY'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_261]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_261]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_261]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_261]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.Util.getInstance(Util.java:408) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.SQLError.createBatchUpdateException(SQLError.java:1163) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.PreparedStatement.executeBatchedInserts(PreparedStatement.java:1587) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.PreparedStatement.executeBatchInternal(PreparedStatement.java:1253) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.StatementImpl.executeBatch(StatementImpl.java:970) ~[chunjun-connector-mysql.jar:?]
at com.dtstack.chunjun.connector.jdbc.statement.FieldNamedPreparedStatementImpl.executeBatch(FieldNamedPreparedStatementImpl.java:131) ~[chunjun-connector-mysql.jar:?]
at com.dtstack.chunjun.connector.jdbc.sink.PreparedStmtProxy.executeBatch(PreparedStmtProxy.java:251) ~[chunjun-connector-mysql.jar:?]
at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.writeMultipleRecordsInternal(JdbcOutputFormat.java:222) ~[chunjun-connector-mysql.jar:?]
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeRecordInternal(BaseRichOutputFormat.java:483) ~[blob_p-1597a926ed8f6241f6b0ddcf32e546702099a14e-58531459bb6093b5258726dc49a3237d:?]
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$initTimingSubmitTask$0(BaseRichOutputFormat.java:443) ~[blob_p-1597a926ed8f6241f6b0ddcf32e546702099a14e-58531459bb6093b5258726dc49a3237d:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_261]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_261]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_261]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_261]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_261]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_261]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_261]
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_261]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_261]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_261]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_261]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.Util.getInstance(Util.java:408) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3976) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3912) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1283) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:783) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2079) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.PreparedStatement.executeUpdateInternal(PreparedStatement.java:2013) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.PreparedStatement.executeLargeUpdate(PreparedStatement.java:5104) ~[chunjun-connector-mysql.jar:?]
at com.mysql.jdbc.PreparedStatement.executeBatchedInserts(PreparedStatement.java:1548) ~[chunjun-connector-mysql.jar:?]
... 14 more
2022-08-23 11:22:54,851 ERROR com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - Writing records failed. com.dtstack.chunjun.throwable.NoRestartException: The dirty consumer shutdown, due to the consumed count exceed the max-consumed [0]
at com.dtstack.chunjun.dirty.consumer.DirtyDataCollector.addConsumed(DirtyDataCollector.java:105)
at com.dtstack.chunjun.dirty.consumer.DirtyDataCollector.offer(DirtyDataCollector.java:79)
at com.dtstack.chunjun.dirty.manager.DirtyManager.collect(DirtyManager.java:140)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeSingleRecord(BaseRichOutputFormat.java:469)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$writeRecordInternal$1(BaseRichOutputFormat.java:487)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeRecordInternal(BaseRichOutputFormat.java:487)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$initTimingSubmitTask$0(BaseRichOutputFormat.java:443)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2022-08-23 11:22:54,852 WARN com.dtstack.chunjun.dirty.log.LogDirtyDataCollector [] - ====================Dirty Data=====================
DirtyDataEntry[jobId='62179b92f6a8bf69378e6a7e872335d2', jobName='666', operatorName='Sink: mysqlsinkfactory', dirtyContent='{"extHeader":[],"byteSize":175,"string":"(1,cccc,18,18888888)","headers":null,"rowKind":"INSERT","headerInfo":null,"arity":4}', errorMessage='com.dtstack.chunjun.throwable.WriteRecordException: JdbcOutputFormat [666] writeRecord error: when converting field[0] in Row(+I(1,cccc,18,18888888))
com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
	at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.processWriteException(JdbcOutputFormat.java:384)
	at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.writeSingleRecordInternal(JdbcOutputFormat.java:199)
	at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeSingleRecord(BaseRichOutputFormat.java:466)
	at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$writeRecordInternal$1(BaseRichOutputFormat.java:487)
	at java.util.ArrayList.forEach(ArrayList.java:1259)
	at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.writeRecordInternal(BaseRichOutputFormat.java:487)
	at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.lambda$initTimingSubmitTask$0(BaseRichOutputFormat.java:443)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: Duplicate entry '1' for key 'PRIMARY'
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
	at com.mysql.jdbc.Util.getInstance(Util.java:408)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:936)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3976)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3912)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2530)
	at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1283)
	at com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:783)
	at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1197)
	at com.dtstack.chunjun.connector.jdbc.statement.FieldNamedPreparedStatementImpl.execute(FieldNamedPreparedStatementImpl.java:141)
	at com.dtstack.chunjun.connector.jdbc.sink.PreparedStmtProxy.writeSingleRecordInternal(PreparedStmtProxy.java:205)
	at com.dtstack.chunjun.connector.jdbc.sink.JdbcOutputFormat.writeSingleRecordInternal(JdbcOutputFormat.java:196)
	... 12 more
', fieldName='null', createTime=2022-08-23 11:22:54.819]
===================================================
2022-08-23 11:23:05,395 INFO com.dtstack.chunjun.connector.mysql.source.MysqlInputFormat [] - subtask input close finished
2022-08-23 11:23:05,403 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188) switched from RUNNING to FINISHED.
2022-08-23 11:23:05,403 INFO org.apache.flink.runtime.taskmanager.Task [] - Freeing task resources for Source: mysqlsourcefactory (1/1)#0 (dbb488235bfc9269c1948b00ee8e7188).
2022-08-23 11:23:05,404 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Un-registering task and sending final execution state FINISHED to JobManager for task Source: mysqlsourcefactory (1/1)#0 dbb488235bfc9269c1948b00ee8e7188.
2022-08-23 11:23:05,404 INFO com.dtstack.chunjun.connector.mysql.sink.MysqlOutputFormat [] - taskNumber[0] close()
Error caused by duplicate primary keys
Search before asking
What happened
Issue 1: in insert mode, a duplicate primary key immediately terminates the task. Issue 2: in insert mode, configuring dirty-data storage does not help either; the task still terminates, and the dirty-data table stays empty.
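The failure mode above can be reproduced outside ChunJun with a few lines of plain database code. This is a minimal sketch, not ChunJun code: sqlite3 stands in for MySQL, and the table and column names are copied from the sink table in the job config below. Running the same insert batch twice hits the primary key again, the generic counterpart of the `Duplicate entry '1' for key 'PRIMARY'` error in the log.

```python
import sqlite3

# Minimal sketch (not ChunJun code): sqlite3 stands in for MySQL.
# Table and column names mirror the sink table from the job config.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE one_copy1 (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, phone TEXT)"
)
rows = [(1, "cccc", 18, "18888888")]

# First run of the job succeeds.
conn.executemany(
    "INSERT INTO one_copy1 (id, name, age, phone) VALUES (?, ?, ?, ?)", rows
)

# Re-running the same insert collides on the primary key, the generic
# counterpart of MySQL's "Duplicate entry '1' for key 'PRIMARY'".
try:
    conn.executemany(
        "INSERT INTO one_copy1 (id, name, age, phone) VALUES (?, ?, ?, ?)", rows
    )
    duplicate = False
except sqlite3.IntegrityError as exc:
    duplicate = True
    print("duplicate key:", exc)
```

On MySQL itself, `INSERT ... ON DUPLICATE KEY UPDATE` or `REPLACE INTO` would absorb the collision instead of failing the batch; whether a given ChunJun `writeMode` value maps to one of these statements is an assumption worth checking against the connector docs.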
What you expected to happen
1. In insert mode (MySQL to MySQL), duplicate primary keys should simply be counted in the job's statistics metrics. 2. Dirty data should be written to the dirty-data table, and the task should not terminate.
How to reproduce
1. Execute this JSON job:
{
  "job": {
    "content": [
      {
        "reader": {
          "parameter": {
            "password": "123456",
            "dataSourceId": 8,
            "column": [
              { "precision": 4, "name": "id", "columnDisplaySize": 4, "type": "INT" },
              { "precision": 255, "name": "name", "columnDisplaySize": 255, "type": "VARCHAR" },
              { "precision": 4, "name": "age", "columnDisplaySize": 4, "type": "TINYINT" },
              { "precision": 255, "name": "phone", "columnDisplaySize": 255, "type": "VARCHAR" }
            ],
            "connection": [
              {
                "jdbcUrl": [ "jdbc:mysql://172.18.8.77:3306/zk_test" ],
                "table": [ "one" ]
              }
            ],
            "increColumn": "id",
            "splitPk": "id",
            "username": "root"
          },
          "name": "mysqlreader"
        },
        "writer": {
          "parameter": {
            "password": "123456",
            "dataSourceId": 8,
            "column": [
              { "precision": 4, "name": "id", "columnDisplaySize": 4, "type": "INT" },
              { "precision": 255, "name": "name", "columnDisplaySize": 255, "type": "VARCHAR" },
              { "precision": 4, "name": "age", "columnDisplaySize": 4, "type": "TINYINT" },
              { "precision": 255, "name": "phone", "columnDisplaySize": 255, "type": "VARCHAR" }
            ],
            "connection": [
              {
                "jdbcUrl": "jdbc:mysql://172.18.8.77:3306/zk_test",
                "table": [ "one_copy1" ]
              }
            ],
            "writeMode": "insert",
            "username": "root"
          },
          "name": "mysqlwriter"
        }
      }
    ],
    "setting": {
      "log": { "isLogger": false },
      "errorLimit": {},
      "speed": { "bytes": 0, "channel": 1 }
    }
  }
}
2. Dirty-data configuration:
{
  "start-chunjun.dirty-data.output-type": "jdbc",
  "start-chunjun.dirty-data.max-rows": "1000",
  "start-chunjun.dirty-data.max-collect-failed-rows": "100",
  "start-chunjun.dirty-data.jdbc.url": "jdbc:mysql://172.18.8.77:3306/zk_test",
  "start-chunjun.dirty-data.jdbc.username": "root",
  "start-chunjun.dirty-data.jdbc.password": "123456",
  "start-chunjun.dirty-data.jdbc.table": "chunjun_dirty_data",
  "start-chunjun.dirty-data.jdbc.batch-size": "10",
  "start-chunjun.dirty-data.log.print-interval": "10"
}
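The dirty-data options are passed as one escaped JSON string. A hypothetical sanity check (plain Python, not ChunJun code; the property keys are copied from the configuration above) is to decode the string and compare the values against what the runtime reports. Notably, the NoRestartException in the log says `max-consumed [0]` even though `max-collect-failed-rows` is set to 100, which suggests the options never reached the dirty-data collector rather than being malformed.

```python
import json

# Hypothetical check, not ChunJun code: decode the escaped option string
# (keys copied from the dirty-data configuration above).
raw = (
    '{"start-chunjun.dirty-data.output-type":"jdbc",'
    '"start-chunjun.dirty-data.max-rows":"1000",'
    '"start-chunjun.dirty-data.max-collect-failed-rows":"100",'
    '"start-chunjun.dirty-data.jdbc.table":"chunjun_dirty_data"}'
)
cfg = json.loads(raw)

# The string decodes cleanly and the limit is 100, while the log reports
# "max-consumed [0]" -- pointing at the options not being applied.
print(cfg["start-chunjun.dirty-data.max-collect-failed-rows"])
```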
Anything else
No response
Version
master
Are you willing to submit PR?
Code of Conduct