DTStack / chunjun

A data integration framework
https://dtstack.github.io/chunjun/
Apache License 2.0

[Question] mysql2hive Required field 'client_protocol' is unset #1461

Open psvmc opened 1 year ago

psvmc commented 1 year ago


Description

Required field 'client_protocol' is unset

The Hive version in my environment is 2.1.0.
The ChunJun version in use is 1.12. Neither the prebuilt package from GitHub nor a package built locally from source and uploaded works.
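For context: in the HiveServer2 Thrift stack, `Required field 'client_protocol' is unset` is the commonly reported symptom of a protocol-version mismatch between the Hive client library and the server, i.e. the `TOpenSessionReq` sent by a newer (or older) client deserializes on the other side with its `client_protocol` field missing. One quick sanity check is to compare the Hive versions baked into the jar names on the deployment's classpath. The helper functions below are a hypothetical sketch (not part of chunjun), using jar names that appear in the log later in this thread:

```python
import re

def jar_version(jar_name):
    """Extract a (major, minor, patch) tuple from a jar filename
    like 'hive-exec-2.1.0.jar'; returns None if no version is found."""
    m = re.search(r"-(\d+)\.(\d+)\.(\d+)", jar_name)
    return tuple(map(int, m.groups())) if m else None

def same_major_minor(a, b):
    """Rough heuristic: treat jars as compatible when their
    major.minor release lines match."""
    return a is not None and b is not None and a[:2] == b[:2]

# Jar names taken from the TaskExecutor classpath in the log below:
server_side = "hive-exec-2.1.0.jar"
client_side = "flink-sql-connector-hive-2.2.0_2.12-1.15.3.jar"

print(jar_version(server_side))   # the Hive runtime jar's version
print(same_major_minor(jar_version(server_side), jar_version(client_side)))
```

Under this heuristic the two jars above land on different Hive release lines (2.1 vs 2.2), which is exactly the kind of mix that can trigger the Thrift error.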


FlechazoW commented 1 year ago

Which Hive is this, Apache Hive 2.1? Or a Spark Thrift Server? Please provide the task script and the logs, thanks.

psvmc commented 1 year ago

Listening for transport dt_socket at address: 5005 2023-01-05 20:19:46,210 - 0 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation:-------------------------------------------------------------------------------- 2023-01-05 20:19:46,211 - 1 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: Starting YARN TaskExecutor runner (Version: 1.12.7, Scala: 2.12, Rev:88d9950, Date:2021-12-14T23:39:33+01:00) 2023-01-05 20:19:46,211 - 1 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: OS current user: root 2023-01-05 20:19:46,712 - 502 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: Current Hadoop/Kerberos user: root 2023-01-05 20:19:46,713 - 503 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.221-b11 2023-01-05 20:19:46,713 - 503 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: Maximum heap size: 491 MiBytes 2023-01-05 20:19:46,713 - 503 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: JAVA_HOME: /usr/java/jdk1.8.0_221-amd64 2023-01-05 20:19:46,716 - 506 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: Hadoop version: 3.1.4 2023-01-05 20:19:46,716 - 506 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: JVM Options: 2023-01-05 20:19:46,716 - 506 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Xmx536870902 2023-01-05 20:19:46,716 - 506 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Xms536870902 2023-01-05 20:19:46,716 - 506 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -XX:MaxDirectMemorySize=268435458 2023-01-05 20:19:46,716 - 506 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -XX:MaxMetaspaceSize=268435456 2023-01-05 20:19:46,717 - 507 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 2023-01-05 
20:19:46,717 - 507 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Dlog.file=/data/tools/bigdata/hadoop-2.7.7/logs/userlogs/application_1672710362889_0056/container_e19_1672710362889_0056_01_000014/taskmanager.log 2023-01-05 20:19:46,717 - 507 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Dlogback.configurationFile=file:./logback.xml 2023-01-05 20:19:46,717 - 507 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: Program Arguments: 2023-01-05 20:19:46,718 - 508 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,718 - 508 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.framework.off-heap.size=134217728b 2023-01-05 20:19:46,718 - 508 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,718 - 508 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.network.max=134217730b 2023-01-05 20:19:46,718 - 508 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.network.min=134217730b 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.framework.heap.size=134217728b 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.managed.size=536870920b 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.cpu.cores=4.0 2023-01-05 20:19:46,719 - 509 INFO [main] 
org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.task.heap.size=402653174b 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.task.off-heap.size=0b 2023-01-05 20:19:46,719 - 509 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.jvm-metaspace.size=268435456b 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.jvm-overhead.max=201326592b 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -D 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: taskmanager.memory.jvm-overhead.min=201326592b 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: --configDir 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: . 
2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Djobmanager.memory.jvm-overhead.min=201326592b 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Dtaskmanager.resource-id=container_e19_1672710362889_0056_01_000014 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Dweb.port=0 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Djobmanager.memory.off-heap.size=134217728b 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Dweb.tmpdir=/tmp/flink-web-dc0b6da1-85c9-46de-a81b-3d2520471ee2 2023-01-05 20:19:46,720 - 510 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Dinternal.taskmanager.resource-id.metadata=hadoop03:37894 2023-01-05 20:19:46,721 - 511 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Djobmanager.rpc.port=37521 2023-01-05 20:19:46,721 - 511 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Drest.address=hadoop01 2023-01-05 20:19:46,721 - 511 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Djobmanager.memory.jvm-metaspace.size=268435456b 2023-01-05 20:19:46,721 - 511 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Djobmanager.memory.heap.size=1073741824b 2023-01-05 20:19:46,721 - 511 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: -Djobmanager.memory.jvm-overhead.max=201326592b 2023-01-05 20:19:46,721 - 511 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation: Classpath: 
:chunjun/bin:chunjun/chunjun-dist/chunjun-core.jar:chunjun/chunjun-dist/connector/binlog/chunjun-connector-binlog.jar:chunjun/chunjun-dist/connector/cassandra/chunjun-connector-cassandra.jar:chunjun/chunjun-dist/connector/clickhouse/chunjun-connector-clickhouse.jar:chunjun/chunjun-dist/connector/db2/chunjun-connector-db2.jar:chunjun/chunjun-dist/connector/dm/chunjun-connector-dm.jar:chunjun/chunjun-dist/connector/doris/chunjun-connector-doris.jar:chunjun/chunjun-dist/connector/elasticsearch7/chunjun-connector-elasticsearch7.jar:chunjun/chunjun-dist/connector/emqx/chunjun-connector-emqx.jar:chunjun/chunjun-dist/connector/file/chunjun-connector-file.jar:chunjun/chunjun-dist/connector/filesystem/chunjun-connector-filesystem.jar:chunjun/chunjun-dist/connector/ftp/chunjun-connector-ftp.jar:chunjun/chunjun-dist/connector/greenplum/chunjun-connector-greenplum.jar:chunjun/chunjun-dist/connector/hbase14/chunjun-connector-hbase-1.4.jar:chunjun/chunjun-dist/connector/hbase2/chunjun-connector-hbase2.jar:chunjun/chunjun-dist/connector/hdfs/chunjun-connector-hdfs.jar:chunjun/chunjun-dist/connector/hive/chunjun-connector-hive.jar:chunjun/chunjun-dist/connector/hive3/chunjun-connector-hive3.jar:chunjun/chunjun-dist/connector/http/chunjun-connector-http.jar:chunjun/chunjun-dist/connector/iceberg/chunjun-connector-iceberg.jar:chunjun/chunjun-dist/connector/influxdb/chunjun-connector-influxdb.jar:chunjun/chunjun-dist/connector/kafka/chunjun-connector-kafka.jar:chunjun/chunjun-dist/connector/kudu/chunjun-connector-kudu.jar:chunjun/chunjun-dist/connector/mongodb/chunjun-connector-mongodb.jar:chunjun/chunjun-dist/connector/mysql/chunjun-connector-mysql.jar:chunjun/chunjun-dist/connector/mysqlcdc/chunjun-connector-mysqlcdc.jar:chunjun/chunjun-dist/connector/mysqld/chunjun-connector-mysqld.jar:chunjun/chunjun-dist/connector/oceanbase/chunjun-connector-oceanbase.jar:chunjun/chunjun-dist/connector/oceanbasecdc/chunjun-connector-oceanbasecdc.jar:chunjun/chunjun-dist/connector/oracle/chunjun-c
onnector-oracle.jar:chunjun/chunjun-dist/connector/oraclelogminer/chunjun-connector-oraclelogminer.jar:chunjun/chunjun-dist/connector/pgwal/chunjun-connector-pgwal.jar:chunjun/chunjun-dist/connector/postgresql/chunjun-connector-postgresql.jar:chunjun/chunjun-dist/connector/rabbitmq/chunjun-connector-rabbitmq.jar:chunjun/chunjun-dist/connector/redis/chunjun-connector-redis.jar:chunjun/chunjun-dist/connector/rocketmq/chunjun-connector-rocketmq.jar:chunjun/chunjun-dist/connector/s3/chunjun-connector-s3.jar:chunjun/chunjun-dist/connector/saphana/chunjun-connector-saphana.jar:chunjun/chunjun-dist/connector/socket/chunjun-connector-socket.jar:chunjun/chunjun-dist/connector/solr/chunjun-connector-solr.jar:chunjun/chunjun-dist/connector/sqlserver/chunjun-connector-sqlserver.jar:chunjun/chunjun-dist/connector/sqlservercdc/chunjun-connector-sqlservercdc.jar:chunjun/chunjun-dist/connector/starrocks/chunjun-connector-starrocks.jar:chunjun/chunjun-dist/connector/stream/chunjun-connector-stream.jar:chunjun/chunjun-dist/connector/sybase/chunjun-connector-sybase.jar:chunjun/chunjun-dist/connector/upsert-kafka/chunjun-connector-kafka.jar:chunjun/chunjun-dist/connector/vertica11/chunjun-connector-vertica11.jar:chunjun/chunjun-dist/ddl-plugins/mysql/chunjun-ddl-mysql.jar:chunjun/chunjun-dist/ddl-plugins/oracle/chunjun-ddl-oracle.jar:chunjun/chunjun-dist/dirty-data-collector/log/chunjun-dirty-log.jar:chunjun/chunjun-dist/dirty-data-collector/mysql/chunjun-dirty-mysql.jar:chunjun/chunjun-dist/docker-build:chunjun/chunjun-dist/formats/pbformat/chunjun-protobuf.jar:chunjun/chunjun-dist/metrics/mysql/chunjun-metrics-mysql.jar:chunjun/chunjun-dist/metrics/prometheus/chunjun-metrics-prometheus.jar:chunjun/chunjun-dist/restore-plugins/mysql/chunjun-restore-mysql.jar:chunjun/chunjun-examples/json/binlog:chunjun/chunjun-examples/json/cassandra:chunjun/chunjun-examples/json/clickhouse:chunjun/chunjun-examples/json/db2:chunjun/chunjun-examples/json/dm:chunjun/chunjun-examples/json/doris:chunjun/c
hunjun-examples/json/elasticsearch5:chunjun/chunjun-examples/json/elasticsearch6:chunjun/chunjun-examples/json/elasticsearch7:chunjun/chunjun-examples/json/emqx:chunjun/chunjun-examples/json/ftp:chunjun/chunjun-examples/json/gbase:chunjun/chunjun-examples/json/greenplum:chunjun/chunjun-examples/json/hbase:chunjun/chunjun-examples/json/hdfs:chunjun/chunjun-examples/json/hive:chunjun/chunjun-examples/json/hive3:chunjun/chunjun-examples/json/http:chunjun/chunjun-examples/json/iceberg:chunjun/chunjun-examples/json/kafka:chunjun/chunjun-examples/json/kingbase:chunjun/chunjun-examples/json/kudu:chunjun/chunjun-examples/json/logminer:chunjun/chunjun-examples/json/mongodb:chunjun/chunjun-examples/json/mysql:chunjun/chunjun-examples/json/mysqlcdc:chunjun/chunjun-examples/json/mysqld:chunjun/chunjun-examples/json/oceanbasecdc:chunjun/chunjun-examples/json/oracle:chunjun/chunjun-examples/json/oracle/split:chunjun/chunjun-examples/json/pgwal:chunjun/chunjun-examples/json/phoenix5:chunjun/chunjun-examples/json/postgresql:chunjun/chunjun-examples/json/redis:chunjun/chunjun-examples/json/s3:chunjun/chunjun-examples/json/saphana:chunjun/chunjun-examples/json/socket:chunjun/chunjun-examples/json/solr:chunjun/chunjun-examples/json/sqlserver:chunjun/chunjun-examples/json/sqlservercdc:chunjun/chunjun-examples/json/starrocks:chunjun/chunjun-examples/json/stream:chunjun/chunjun-examples/json/sybase:chunjun/chunjun-examples/json/vertica11:chunjun/chunjun-examples/sql/binlog:chunjun/chunjun-examples/sql/cassandra:chunjun/chunjun-examples/sql/clickhouse:chunjun/chunjun-examples/sql/db2:chunjun/chunjun-examples/sql/dm:chunjun/chunjun-examples/sql/doris:chunjun/chunjun-examples/sql/elasticsearch5:chunjun/chunjun-examples/sql/elasticsearch6:chunjun/chunjun-examples/sql/elasticsearch7:chunjun/chunjun-examples/sql/emqx:chunjun/chunjun-examples/sql/file:chunjun/chunjun-examples/sql/filesystem/s3:chunjun/chunjun-examples/sql/ftp:chunjun/chunjun-examples/sql/gbase:chunjun/chunjun-examples/sql/green
plum:chunjun/chunjun-examples/sql/hbase:chunjun/chunjun-examples/sql/hdfs:chunjun/chunjun-examples/sql/hive:chunjun/chunjun-examples/sql/http:chunjun/chunjun-examples/sql/iceberg:chunjun/chunjun-examples/sql/kafka:chunjun/chunjun-examples/sql/kingbase:chunjun/chunjun-examples/sql/kudu:chunjun/chunjun-examples/sql/logminer:chunjun/chunjun-examples/sql/mongo:chunjun/chunjun-examples/sql/mysql:chunjun/chunjun-examples/sql/oceanbasecdc:chunjun/chunjun-examples/sql/oracle:chunjun/chunjun-examples/sql/oracle/increment:chunjun/chunjun-examples/sql/oracle/lookup:chunjun/chunjun-examples/sql/oracle/parallelism:chunjun/chunjun-examples/sql/oracle/polling:chunjun/chunjun-examples/sql/phoenix5:chunjun/chunjun-examples/sql/postgresql:chunjun/chunjun-examples/sql/rabbitmq:chunjun/chunjun-examples/sql/redis:chunjun/chunjun-examples/sql/rocketmq:chunjun/chunjun-examples/sql/s3:chunjun/chunjun-examples/sql/saphana:chunjun/chunjun-examples/sql/solr:chunjun/chunjun-examples/sql/sqlserver:chunjun/chunjun-examples/sql/sqlservercdc:chunjun/chunjun-examples/sql/starrocks:chunjun/chunjun-examples/sql/stream:chunjun/chunjun-examples/sql/vertica11:chunjun/chunjun-examples/sql/vertica11/lookup:chunjun/chunjun-examples/sql/window:chunjun/lib/chunjun-clients.jar:lib/flink-connector-hive_2.12-1.12.7.jar:lib/flink-csv-1.12.7.jar:lib/flink-json-1.12.7.jar:lib/flink-metrics-prometheus-1.12.7.jar:lib/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar:lib/flink-shaded-zookeeper-3.4.14.jar:lib/flink-sql-connector-hive-2.2.0_2.12-1.15.3.jar:lib/flink-table-blink_2.12-1.12.7.jar:lib/flink-table_2.12-1.12.7.jar:lib/hive-exec-2.1.0.jar:lib/log4j-1.2-api-2.16.0.jar:lib/log4j-api-2.16.0.jar:lib/log4j-core-2.16.0.jar:lib/log4j-slf4j-impl-2.16.0.jar:flink-dist_2.12-1.12.7.jar:flink-conf.yaml::/data/tools/bigdata/hadoop-2.7.7/etc/hadoop:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7-tests.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar:/data/tools/bigdata/hadoo
p-2.7.7/share/hadoop/common/hadoop-nfs-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/activation-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/paranamer-2.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/log4j-1.2.17.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/servlet-api-2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/asm-3.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/avro-1.7.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jettison-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-cli-1.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-codec-1.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jetty-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-collections-3.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-compress-1.4.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-configuration-1.6.jar:/data/tools/bigdata/h
adoop-2.7.7/share/hadoop/common/lib/stax-api-1.0-2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-digester-1.8.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/xmlenc-0.52.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-httpclient-3.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-io-2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-lang-2.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/xz-1.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-logging-1.1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/zookeeper-3.4.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-math3-3.1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-net-3.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/curator-client-2.7.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/curator-framework-2.7.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/gson-2.2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/guava-11.0.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/hadoop-annotations-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/hadoop-auth-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/hamcrest-core-1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/mockito-all-1.8.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/httpclient-4.2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/httpcore-4.2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/sha
re/hadoop/common/lib/netty-3.6.2.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jersey-core-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jersey-json-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jersey-server-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jets3t-0.9.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jetty-util-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jsch-0.1.54.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jsp-api-2.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jsr305-3.0.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/junit-4.11.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/hadoop-hdfs-2.7.7-tests.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/hadoop-hdfs-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/asm-3.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-io-2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/guava-11.0.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop
/hdfs/lib/htrace-core-3.1.0-incubating.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-api-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-client-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-common-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-registry-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-common-2.7.7.jar:/data/to
ols/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/activation-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/aopalliance-1.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/asm-3.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-cli-1.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-codec-1.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-io-2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-lang-2.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/guava-11.0.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/guice-3.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/javax.inject-1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/data/tools
/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-client-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-core-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-json-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-server-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jettison-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jetty-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/log4j-1.2.17.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/servlet-api-2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/xz-1.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/zookeeper-3.4.6.jar 2023-01-05 20:19:46,722 - 512 INFO [main] org.apache.flink.runtime.util.EnvironmentInformation:-------------------------------------------------------------------------------- 2023-01-05 20:19:46,723 - 513 INFO [main] org.apache.flink.runtime.util.SignalHandler:Registered UNIX signal handlers for [TERM, HUP, INT] 2023-01-05 20:19:46,726 - 516 INFO [main] org.apache.flink.yarn.YarnTaskExecutorRunner:Current working Directory: /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/container_e19_1672710362889_0056_01_000014 2023-01-05 20:19:46,752 - 542 INFO [main] 
org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: env.java.opts.jobmanager, "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006" 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.port, 9091 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: yarn.flink-dist-jar, file:/data/tools/bigdata/flink-1.12.7/lib/flink-dist_2.12-1.12.7.jar 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.interval, 30 SECONDS 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.jobName, flink-metrics-ppg 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: high-availability.cluster-id, application_1672710362889_0056 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: jobmanager.rpc.address, hadoop01 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: state.savepoints.dir, hdfs://hacluster/flink112/flink-savepoints 2023-01-05 20:19:46,753 - 543 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: high-availability.zookeeper.path.root, /flink112 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.class, org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: 
high-availability.storageDir, hdfs://hacluster/flink112/ha/ 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: parallelism.default, 4 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: yarn.application-attempts, 10 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.randomJobNameSuffix, true 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: taskmanager.numberOfTaskSlots, 4 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: taskmanager.memory.process.size, 1728m 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: jobmanager.archive.fs.dir, hdfs://hacluster/flink112/completed-jobs 2023-01-05 20:19:46,754 - 544 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: state.backend.fs.checkpointdir, hdfs://hacluster/flink112/flink-checkpoints 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: execution.target, yarn-session 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: jobmanager.memory.process.size, 1600m 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: yarn.ship-files, /data/tools/bigdata/taier/chunjun 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: jobmanager.rpc.port, 6123 2023-01-05 20:19:46,755 - 545 INFO [main] 
org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: rest.port, 8081 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.deleteOnShutdown, false 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: high-availability.zookeeper.quorum, hadoop01:2181,hadoop02:2181,hadoop03:2181 2023-01-05 20:19:46,755 - 545 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: execution.attached, false 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: internal.cluster.execution-mode, NORMAL 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: high-availability, zookeeper 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: web.submit.enable, true 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: rest.address, 0.0.0.0 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: state.backend, filesystem 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: high-availability.zookeeper.client.acl, open 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: env.java.opts.taskmanager, "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" 2023-01-05 20:19:46,756 - 546 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: metrics.reporter.promgateway.host, hadoop01 2023-01-05 20:19:46,756 - 546 INFO [main] 
org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: $internal.deployment.config-dir, /data/tools/bigdata/flink-1.12.7/conf 2023-01-05 20:19:46,757 - 547 INFO [main] org.apache.flink.configuration.GlobalConfiguration:Loading configuration property: $internal.yarn.log-config-file, /data/tools/bigdata/flink-1.12.7/conf/logback.xml 2023-01-05 20:19:46,757 - 547 INFO [main] org.apache.flink.yarn.YarnTaskExecutorRunner:Current working/local Directory: /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056 2023-01-05 20:19:46,773 - 563 INFO [main] org.apache.flink.runtime.clusterframework.BootstrapTools:Setting directories for temporary files to: /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056 2023-01-05 20:19:46,774 - 564 INFO [main] org.apache.flink.yarn.YarnTaskExecutorRunner:TM: local keytab path obtained null 2023-01-05 20:19:46,774 - 564 INFO [main] org.apache.flink.yarn.YarnTaskExecutorRunner:TM: keytab principal obtained null 2023-01-05 20:19:46,779 - 569 INFO [main] org.apache.flink.yarn.YarnTaskExecutorRunner:YARN daemon is running as: root Yarn client user obtainer: root 2023-01-05 20:19:46,916 - 706 INFO [main] org.apache.flink.runtime.security.modules.HadoopModule:Hadoop user set to root (auth:SIMPLE) 2023-01-05 20:19:46,938 - 728 INFO [main] org.apache.flink.runtime.security.modules.JaasModule:Jaas file will be created as /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/jaas-6605491112292209265.conf. 
2023-01-05 20:19:47,962 - 1752 INFO [main] org.apache.flink.runtime.blob.FileSystemBlobStore:Creating highly available BLOB storage directory at hdfs://hacluster/flink112/ha/application_1672710362889_0056/blob 2023-01-05 20:19:48,079 - 1869 INFO [main] org.apache.flink.runtime.util.ZooKeeperUtils:Enforcing default ACL for ZK connections 2023-01-05 20:19:48,079 - 1869 INFO [main] org.apache.flink.runtime.util.ZooKeeperUtils:Using '/flink112/application_1672710362889_0056' as Zookeeper namespace. 2023-01-05 20:19:48,121 - 1911 INFO [main] org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility:Running in ZooKeeper 3.4.x compatibility mode 2023-01-05 20:19:48,122 - 1912 INFO [main] org.apache.flink.shaded.curator4.org.apache.curator.utils.Compatibility:Using emulated InjectSessionExpiration 2023-01-05 20:19:48,148 - 1938 INFO [main] org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl:Starting 2023-01-05 20:19:48,155 - 1945 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT 2023-01-05 20:19:48,155 - 1945 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:host.name=hadoop03 2023-01-05 20:19:48,155 - 1945 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:java.version=1.8.0_221 2023-01-05 20:19:48,155 - 1945 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:java.vendor=Oracle Corporation 2023-01-05 20:19:48,155 - 1945 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:java.home=/usr/java/jdk1.8.0_221-amd64/jre 2023-01-05 20:19:48,155 - 1945 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client 
environment:java.class.path=:chunjun/bin:chunjun/chunjun-dist/chunjun-core.jar:chunjun/chunjun-dist/connector/binlog/chunjun-connector-binlog.jar:chunjun/chunjun-dist/connector/cassandra/chunjun-connector-cassandra.jar:chunjun/chunjun-dist/connector/clickhouse/chunjun-connector-clickhouse.jar:chunjun/chunjun-dist/connector/db2/chunjun-connector-db2.jar:chunjun/chunjun-dist/connector/dm/chunjun-connector-dm.jar:chunjun/chunjun-dist/connector/doris/chunjun-connector-doris.jar:chunjun/chunjun-dist/connector/elasticsearch7/chunjun-connector-elasticsearch7.jar:chunjun/chunjun-dist/connector/emqx/chunjun-connector-emqx.jar:chunjun/chunjun-dist/connector/file/chunjun-connector-file.jar:chunjun/chunjun-dist/connector/filesystem/chunjun-connector-filesystem.jar:chunjun/chunjun-dist/connector/ftp/chunjun-connector-ftp.jar:chunjun/chunjun-dist/connector/greenplum/chunjun-connector-greenplum.jar:chunjun/chunjun-dist/connector/hbase14/chunjun-connector-hbase-1.4.jar:chunjun/chunjun-dist/connector/hbase2/chunjun-connector-hbase2.jar:chunjun/chunjun-dist/connector/hdfs/chunjun-connector-hdfs.jar:chunjun/chunjun-dist/connector/hive/chunjun-connector-hive.jar:chunjun/chunjun-dist/connector/hive3/chunjun-connector-hive3.jar:chunjun/chunjun-dist/connector/http/chunjun-connector-http.jar:chunjun/chunjun-dist/connector/iceberg/chunjun-connector-iceberg.jar:chunjun/chunjun-dist/connector/influxdb/chunjun-connector-influxdb.jar:chunjun/chunjun-dist/connector/kafka/chunjun-connector-kafka.jar:chunjun/chunjun-dist/connector/kudu/chunjun-connector-kudu.jar:chunjun/chunjun-dist/connector/mongodb/chunjun-connector-mongodb.jar:chunjun/chunjun-dist/connector/mysql/chunjun-connector-mysql.jar:chunjun/chunjun-dist/connector/mysqlcdc/chunjun-connector-mysqlcdc.jar:chunjun/chunjun-dist/connector/mysqld/chunjun-connector-mysqld.jar:chunjun/chunjun-dist/connector/oceanbase/chunjun-connector-oceanbase.jar:chunjun/chunjun-dist/connector/oceanbasecdc/chunjun-connector-oceanbasecdc.jar:chunjun/chunjun-dis
t/connector/oracle/chunjun-connector-oracle.jar:chunjun/chunjun-dist/connector/oraclelogminer/chunjun-connector-oraclelogminer.jar:chunjun/chunjun-dist/connector/pgwal/chunjun-connector-pgwal.jar:chunjun/chunjun-dist/connector/postgresql/chunjun-connector-postgresql.jar:chunjun/chunjun-dist/connector/rabbitmq/chunjun-connector-rabbitmq.jar:chunjun/chunjun-dist/connector/redis/chunjun-connector-redis.jar:chunjun/chunjun-dist/connector/rocketmq/chunjun-connector-rocketmq.jar:chunjun/chunjun-dist/connector/s3/chunjun-connector-s3.jar:chunjun/chunjun-dist/connector/saphana/chunjun-connector-saphana.jar:chunjun/chunjun-dist/connector/socket/chunjun-connector-socket.jar:chunjun/chunjun-dist/connector/solr/chunjun-connector-solr.jar:chunjun/chunjun-dist/connector/sqlserver/chunjun-connector-sqlserver.jar:chunjun/chunjun-dist/connector/sqlservercdc/chunjun-connector-sqlservercdc.jar:chunjun/chunjun-dist/connector/starrocks/chunjun-connector-starrocks.jar:chunjun/chunjun-dist/connector/stream/chunjun-connector-stream.jar:chunjun/chunjun-dist/connector/sybase/chunjun-connector-sybase.jar:chunjun/chunjun-dist/connector/upsert-kafka/chunjun-connector-kafka.jar:chunjun/chunjun-dist/connector/vertica11/chunjun-connector-vertica11.jar:chunjun/chunjun-dist/ddl-plugins/mysql/chunjun-ddl-mysql.jar:chunjun/chunjun-dist/ddl-plugins/oracle/chunjun-ddl-oracle.jar:chunjun/chunjun-dist/dirty-data-collector/log/chunjun-dirty-log.jar:chunjun/chunjun-dist/dirty-data-collector/mysql/chunjun-dirty-mysql.jar:chunjun/chunjun-dist/docker-build:chunjun/chunjun-dist/formats/pbformat/chunjun-protobuf.jar:chunjun/chunjun-dist/metrics/mysql/chunjun-metrics-mysql.jar:chunjun/chunjun-dist/metrics/prometheus/chunjun-metrics-prometheus.jar:chunjun/chunjun-dist/restore-plugins/mysql/chunjun-restore-mysql.jar:chunjun/chunjun-examples/json/binlog:chunjun/chunjun-examples/json/cassandra:chunjun/chunjun-examples/json/clickhouse:chunjun/chunjun-examples/json/db2:chunjun/chunjun-examples/json/dm:chunjun/chunjun-e
xamples/json/doris:chunjun/chunjun-examples/json/elasticsearch5:chunjun/chunjun-examples/json/elasticsearch6:chunjun/chunjun-examples/json/elasticsearch7:chunjun/chunjun-examples/json/emqx:chunjun/chunjun-examples/json/ftp:chunjun/chunjun-examples/json/gbase:chunjun/chunjun-examples/json/greenplum:chunjun/chunjun-examples/json/hbase:chunjun/chunjun-examples/json/hdfs:chunjun/chunjun-examples/json/hive:chunjun/chunjun-examples/json/hive3:chunjun/chunjun-examples/json/http:chunjun/chunjun-examples/json/iceberg:chunjun/chunjun-examples/json/kafka:chunjun/chunjun-examples/json/kingbase:chunjun/chunjun-examples/json/kudu:chunjun/chunjun-examples/json/logminer:chunjun/chunjun-examples/json/mongodb:chunjun/chunjun-examples/json/mysql:chunjun/chunjun-examples/json/mysqlcdc:chunjun/chunjun-examples/json/mysqld:chunjun/chunjun-examples/json/oceanbasecdc:chunjun/chunjun-examples/json/oracle:chunjun/chunjun-examples/json/oracle/split:chunjun/chunjun-examples/json/pgwal:chunjun/chunjun-examples/json/phoenix5:chunjun/chunjun-examples/json/postgresql:chunjun/chunjun-examples/json/redis:chunjun/chunjun-examples/json/s3:chunjun/chunjun-examples/json/saphana:chunjun/chunjun-examples/json/socket:chunjun/chunjun-examples/json/solr:chunjun/chunjun-examples/json/sqlserver:chunjun/chunjun-examples/json/sqlservercdc:chunjun/chunjun-examples/json/starrocks:chunjun/chunjun-examples/json/stream:chunjun/chunjun-examples/json/sybase:chunjun/chunjun-examples/json/vertica11:chunjun/chunjun-examples/sql/binlog:chunjun/chunjun-examples/sql/cassandra:chunjun/chunjun-examples/sql/clickhouse:chunjun/chunjun-examples/sql/db2:chunjun/chunjun-examples/sql/dm:chunjun/chunjun-examples/sql/doris:chunjun/chunjun-examples/sql/elasticsearch5:chunjun/chunjun-examples/sql/elasticsearch6:chunjun/chunjun-examples/sql/elasticsearch7:chunjun/chunjun-examples/sql/emqx:chunjun/chunjun-examples/sql/file:chunjun/chunjun-examples/sql/filesystem/s3:chunjun/chunjun-examples/sql/ftp:chunjun/chunjun-examples/sql/gbase:chunju
n/chunjun-examples/sql/greenplum:chunjun/chunjun-examples/sql/hbase:chunjun/chunjun-examples/sql/hdfs:chunjun/chunjun-examples/sql/hive:chunjun/chunjun-examples/sql/http:chunjun/chunjun-examples/sql/iceberg:chunjun/chunjun-examples/sql/kafka:chunjun/chunjun-examples/sql/kingbase:chunjun/chunjun-examples/sql/kudu:chunjun/chunjun-examples/sql/logminer:chunjun/chunjun-examples/sql/mongo:chunjun/chunjun-examples/sql/mysql:chunjun/chunjun-examples/sql/oceanbasecdc:chunjun/chunjun-examples/sql/oracle:chunjun/chunjun-examples/sql/oracle/increment:chunjun/chunjun-examples/sql/oracle/lookup:chunjun/chunjun-examples/sql/oracle/parallelism:chunjun/chunjun-examples/sql/oracle/polling:chunjun/chunjun-examples/sql/phoenix5:chunjun/chunjun-examples/sql/postgresql:chunjun/chunjun-examples/sql/rabbitmq:chunjun/chunjun-examples/sql/redis:chunjun/chunjun-examples/sql/rocketmq:chunjun/chunjun-examples/sql/s3:chunjun/chunjun-examples/sql/saphana:chunjun/chunjun-examples/sql/solr:chunjun/chunjun-examples/sql/sqlserver:chunjun/chunjun-examples/sql/sqlservercdc:chunjun/chunjun-examples/sql/starrocks:chunjun/chunjun-examples/sql/stream:chunjun/chunjun-examples/sql/vertica11:chunjun/chunjun-examples/sql/vertica11/lookup:chunjun/chunjun-examples/sql/window:chunjun/lib/chunjun-clients.jar:lib/flink-connector-hive_2.12-1.12.7.jar:lib/flink-csv-1.12.7.jar:lib/flink-json-1.12.7.jar:lib/flink-metrics-prometheus-1.12.7.jar:lib/flink-shaded-hadoop-2-uber-2.7.5-10.0.jar:lib/flink-shaded-zookeeper-3.4.14.jar:lib/flink-sql-connector-hive-2.2.0_2.12-1.15.3.jar:lib/flink-table-blink_2.12-1.12.7.jar:lib/flink-table_2.12-1.12.7.jar:lib/hive-exec-2.1.0.jar:lib/log4j-1.2-api-2.16.0.jar:lib/log4j-api-2.16.0.jar:lib/log4j-core-2.16.0.jar:lib/log4j-slf4j-impl-2.16.0.jar:flink-dist_2.12-1.12.7.jar:flink-conf.yaml::/data/tools/bigdata/hadoop-2.7.7/etc/hadoop:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7-tests.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.j
ar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/hadoop-nfs-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/activation-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/paranamer-2.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/log4j-1.2.17.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/servlet-api-2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/asm-3.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/avro-1.7.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jettison-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-cli-1.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-codec-1.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jetty-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-collections-3.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-compress-1.4.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-configuration-1
.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/stax-api-1.0-2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-digester-1.8.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/xmlenc-0.52.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-httpclient-3.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-io-2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-lang-2.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/xz-1.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-logging-1.1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/zookeeper-3.4.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-math3-3.1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/commons-net-3.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/curator-client-2.7.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/curator-framework-2.7.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/gson-2.2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/guava-11.0.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/hadoop-annotations-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/hadoop-auth-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/hamcrest-core-1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/mockito-all-1.8.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/httpclient-4.2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/httpcore-4.2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/data/to
ols/bigdata/hadoop-2.7.7/share/hadoop/common/lib/netty-3.6.2.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jersey-core-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jersey-json-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jersey-server-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jets3t-0.9.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jetty-util-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jsch-0.1.54.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jsp-api-2.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/jsr305-3.0.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/junit-4.11.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/hadoop-hdfs-2.7.7-tests.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/hadoop-hdfs-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/asm-3.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-io-2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/guava-11.0.2.jar:/data/tools/bigda
ta/hadoop-2.7.7/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-api-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-client-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-common-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-registry-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-serv
er-common-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.7.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/activation-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/aopalliance-1.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/asm-3.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-cli-1.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-codec-1.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-io-2.4.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-lang-2.6.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/guava-11.0.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/guice-3.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/javax.inject-1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jaxb-
impl-2.2.3-1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-client-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-core-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-json-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jersey-server-1.9.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jettison-1.1.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jetty-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/log4j-1.2.17.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/servlet-api-2.5.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/xz-1.0.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/yarn/lib/zookeeper-3.4.6.jar 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:java.library.path=:/data/tools/bigdata/hadoop-2.7.7/lib/native:/data/tools/bigdata/hadoop-2.7.7/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:java.io.tmpdir=/tmp 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:java.compiler= 2023-01-05 20:19:48,156 - 1946 INFO 
[main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:os.name=Linux 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:os.arch=amd64 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:os.version=3.10.0-1160.71.1.el7.x86_64 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:user.name=root 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:user.home=/root 2023-01-05 20:19:48,156 - 1946 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.Environment:Client environment:user.dir=/data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/container_e19_1672710362889_0056_01_000014 2023-01-05 20:19:48,157 - 1947 INFO [main] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper:Initiating client connection, connectString=hadoop01:2181,hadoop02:2181,hadoop03:2181 sessionTimeout=60000 watcher=org.apache.flink.shaded.curator4.org.apache.curator.ConnectionState@345cf395 2023-01-05 20:19:48,169 - 1959 WARN [main-SendThread(hadoop02:2181)] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$SendThread:SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/jaas-6605491112292209265.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. 
2023-01-05 20:19:48,170 - 1960 INFO [main-SendThread(hadoop02:2181)] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$SendThread:Opening socket connection to server hadoop02/192.168.7.102:2181 2023-01-05 20:19:48,170 - 1960 INFO [main-SendThread(hadoop02:2181)] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$SendThread:Socket connection established to hadoop02/192.168.7.102:2181, initiating session 2023-01-05 20:19:48,176 - 1966 INFO [main] org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl:Default schema 2023-01-05 20:19:48,177 - 1967 ERROR [main-EventThread] org.apache.flink.shaded.curator4.org.apache.curator.ConnectionState:Authentication failed 2023-01-05 20:19:48,177 - 1967 INFO [main] org.apache.flink.runtime.taskexecutor.TaskManagerRunner:Using configured hostname/address for TaskManager: hadoop03. 2023-01-05 20:19:48,180 - 1970 INFO [main-SendThread(hadoop02:2181)] org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$SendThread:Session establishment complete on server hadoop02/192.168.7.102:2181, sessionid = 0x200000101c9002f, negotiated timeout = 40000 2023-01-05 20:19:48,184 - 1974 INFO [main-EventThread] org.apache.flink.shaded.curator4.org.apache.curator.framework.state.ConnectionStateManager:State change: CONNECTED 2023-01-05 20:19:48,184 - 1974 INFO [main] org.apache.flink.runtime.clusterframework.BootstrapTools:Trying to start actor system, external address hadoop03:0, bind address 0.0.0.0:0. 
2023-01-05 20:19:49,042 - 2832 INFO [flink-akka.actor.default-dispatcher-3] akka.event.slf4j.Slf4jLogger$$anonfun$receive$1:Slf4jLogger started 2023-01-05 20:19:49,071 - 2861 INFO [flink-akka.actor.default-dispatcher-5] org.slf4j.helpers.MarkerIgnoringBase:Starting remoting 2023-01-05 20:19:49,259 - 3049 INFO [flink-akka.actor.default-dispatcher-5] org.slf4j.helpers.MarkerIgnoringBase:Remoting started; listening on addresses :[akka.tcp://flink@hadoop03:37980] 2023-01-05 20:19:49,397 - 3187 INFO [main] org.apache.flink.runtime.clusterframework.BootstrapTools:Actor system started at akka.tcp://flink@hadoop03:37980 2023-01-05 20:19:49,439 - 3229 WARN [main] org.apache.flink.runtime.metrics.ReporterSetup:Multiple implementations of the same reporter were found in 'lib' and/or 'plugins' directories for org.apache.flink.metrics.prometheus.PrometheusReporterFactory. It is recommended to remove redundant reporter JARs to resolve used versions' ambiguity. 2023-01-05 20:19:49,440 - 3230 WARN [main] org.apache.flink.runtime.metrics.ReporterSetup:Multiple implementations of the same reporter were found in 'lib' and/or 'plugins' directories for org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterFactory. It is recommended to remove redundant reporter JARs to resolve used versions' ambiguity. 2023-01-05 20:19:49,453 - 3243 INFO [main] org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter:Configured PrometheusPushGatewayReporter with {host:hadoop01, port:9091, jobName:flink-metrics-ppg019bd965e49f7a10cd58defb0fb0203c, randomJobNameSuffix:true, deleteOnShutdown:false, groupingKey:{}} 2023-01-05 20:19:49,454 - 3244 INFO [main] org.apache.flink.runtime.metrics.MetricRegistryImpl:Periodically reporting metrics in intervals of 30 s for reporter promgateway of type org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter. 
2023-01-05 20:19:49,462 - 3252 INFO [main] org.apache.flink.runtime.clusterframework.BootstrapTools:Trying to start actor system, external address hadoop03:0, bind address 0.0.0.0:0. 2023-01-05 20:19:49,489 - 3279 INFO [flink-metrics-2] akka.event.slf4j.Slf4jLogger$$anonfun$receive$1:Slf4jLogger started 2023-01-05 20:19:49,495 - 3285 INFO [flink-metrics-2] org.slf4j.helpers.MarkerIgnoringBase:Starting remoting 2023-01-05 20:19:49,526 - 3316 INFO [flink-metrics-2] org.slf4j.helpers.MarkerIgnoringBase:Remoting started; listening on addresses :[akka.tcp://flink-metrics@hadoop03:39791] 2023-01-05 20:19:49,551 - 3341 INFO [main] org.apache.flink.runtime.clusterframework.BootstrapTools:Actor system started at akka.tcp://flink-metrics@hadoop03:39791 2023-01-05 20:19:49,580 - 3370 INFO [main] org.apache.flink.runtime.rpc.akka.AkkaRpcService:Starting RPC endpoint for org.apache.flink.runtime.metrics.dump.MetricQueryService at akka://flink-metrics/user/rpc/MetricQueryService_container_e19_1672710362889_0056_01_000014 . 
2023-01-05 20:19:49,591 - 3381 INFO [main] org.apache.flink.runtime.blob.AbstractBlobCache:Created BLOB cache storage directory /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/blobStore-2d0d67f7-09d7-43de-a593-3021bf3e4c7f 2023-01-05 20:19:49,595 - 3385 INFO [main] org.apache.flink.runtime.blob.AbstractBlobCache:Created BLOB cache storage directory /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/blobStore-ebdf39c4-7ee6-468a-9e15-df81a5db77ac 2023-01-05 20:19:49,597 - 3387 INFO [main] org.apache.flink.runtime.externalresource.ExternalResourceUtils:Enabled external resources: [] 2023-01-05 20:19:49,597 - 3387 INFO [main] org.apache.flink.runtime.externalresource.ExternalResourceUtils:Enabled external resources: [] 2023-01-05 20:19:49,597 - 3387 INFO [main] org.apache.flink.runtime.taskexecutor.TaskManagerRunner:Starting TaskManager with ResourceID: container_e19_1672710362889_0056_01_000014(hadoop03:37894) 2023-01-05 20:19:49,665 - 3455 INFO [main] org.apache.flink.runtime.taskexecutor.TaskManagerServices:Temporary file directory '/data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056': total 77 GB, usable 50 GB (64.94% usable) 2023-01-05 20:19:49,670 - 3460 INFO [main] org.apache.flink.runtime.io.disk.FileChannelManagerImpl:FileChannelManager uses directory /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/flink-io-4e8ee6f4-1624-4f1f-bc71-527cda468a77 for spill files. 
2023-01-05 20:19:49,684 - 3474 INFO [main] org.apache.flink.runtime.io.network.netty.NettyConfig:NettyConfig [server address: /0.0.0.0, server port: 0, ssl enabled: false, memory segment size (bytes): 32768, transport type: AUTO, number of server threads: 4 (manual), number of client threads: 4 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)] 2023-01-05 20:19:49,687 - 3477 INFO [main] org.apache.flink.runtime.io.disk.FileChannelManagerImpl:FileChannelManager uses directory /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/flink-netty-shuffle-cfffb975-d727-4976-b019-2e1c96f8550e for spill files. 2023-01-05 20:19:49,840 - 3630 INFO [main] org.apache.flink.runtime.io.network.buffer.NetworkBufferPool:Allocated 128 MB for network buffer pool (number of memory segments: 4096, bytes per segment: 32768). 2023-01-05 20:19:49,853 - 3643 INFO [main] org.apache.flink.runtime.io.network.NettyShuffleEnvironment:Starting the network environment and its components. 2023-01-05 20:19:49,932 - 3722 INFO [main] org.apache.flink.runtime.io.network.netty.NettyClient:Transport type 'auto': using EPOLL. 2023-01-05 20:19:49,934 - 3724 INFO [main] org.apache.flink.runtime.io.network.netty.NettyClient:Successful initialization (took 81 ms). 2023-01-05 20:19:49,940 - 3730 INFO [main] org.apache.flink.runtime.io.network.netty.NettyServer:Transport type 'auto': using EPOLL. 2023-01-05 20:19:49,986 - 3776 INFO [main] org.apache.flink.runtime.io.network.netty.NettyServer:Successful initialization (took 49 ms). Listening on SocketAddress /0:0:0:0:0:0:0:0%0:43749. 2023-01-05 20:19:49,987 - 3777 INFO [main] org.apache.flink.runtime.taskexecutor.KvStateService:Starting the kvState service and its components. 
2023-01-05 20:19:50,019 - 3809 INFO [main] org.apache.flink.runtime.rpc.akka.AkkaRpcService:Starting RPC endpoint for org.apache.flink.runtime.taskexecutor.TaskExecutor at akka://flink/user/rpc/taskmanager_0 .
2023-01-05 20:19:50,077 - 3867 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService:Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}.
2023-01-05 20:19:50,079 - 3869 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService:Start job leader service.
2023-01-05 20:19:50,080 - 3870 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.filecache.FileCache:User file cache uses directory /data/tools/bigdata/zdata/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1672710362889_0056/flink-dist-cache-89b0c753-c281-445b-8219-81d3bb8c70f2
2023-01-05 20:19:50,115 - 3905 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.TaskExecutor:Connecting to ResourceManager akka.tcp://flink@hadoop01:37521/user/rpc/resourcemanager_0(ad5887fd6fa69719ae99b05a20554c1d).
2023-01-05 20:19:50,307 - 4097 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.registration.RetryingRegistration:Resolved ResourceManager address, beginning registration
2023-01-05 20:19:50,350 - 4140 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.TaskExecutorToResourceManagerConnection:Successful registration at resource manager akka.tcp://flink@hadoop01:37521/user/rpc/resourcemanager_0 under registration id 084db5a8e25f828566a77f7738761271.
2023-01-05 20:19:50,362 - 4152 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.TaskExecutor:Receive slot request ce4c5fd7e1a555d9b6f65b3af82d5d7a for job b2815e37452f75af87b0af7fa3547694 from resource manager with leader id ad5887fd6fa69719ae99b05a20554c1d.
2023-01-05 20:19:50,367 - 4157 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.TaskExecutor:Allocated slot for ce4c5fd7e1a555d9b6f65b3af82d5d7a.
2023-01-05 20:19:50,368 - 4158 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService:Add job b2815e37452f75af87b0af7fa3547694 for job leader monitoring.
2023-01-05 20:19:50,370 - 4160 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService:Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/b2815e37452f75af87b0af7fa3547694/job_manager_lock'}.
2023-01-05 20:19:50,379 - 4169 INFO [main-EventThread] org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService$JobManagerLeaderListener:Try to register at job manager akka.tcp://flink@hadoop01:37521/user/rpc/jobmanager_16 with leader id b5852f28-e63c-4a2c-a082-805345d36b45.
2023-01-05 20:19:50,395 - 4185 INFO [flink-akka.actor.default-dispatcher-4] org.apache.flink.runtime.registration.RetryingRegistration:Resolved JobManager address, beginning registration
2023-01-05 20:19:50,408 - 4198 INFO [flink-akka.actor.default-dispatcher-2] org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService$JobManagerLeaderListener$JobManagerRegisteredRpcConnection:Successful registration at job manager akka.tcp://flink@hadoop01:37521/user/rpc/jobmanager_16 for job b2815e37452f75af87b0af7fa3547694.
2023-01-05 20:19:50,409 - 4199 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.TaskExecutor:Establish JobManager connection for job b2815e37452f75af87b0af7fa3547694.
2023-01-05 20:19:50,413 - 4203 INFO [flink-akka.actor.default-dispatcher-3] org.apache.flink.runtime.taskexecutor.TaskExecutor:Offer reserved slots to the leader of job b2815e37452f75af87b0af7fa3547694.
2023-01-05 20:19:50,434 - 4224 INFO [flink-akka.actor.default-dispatcher-17] org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl:Activate slot ce4c5fd7e1a555d9b6f65b3af82d5d7a.
2023-01-05 20:19:50,440 - 4230 INFO [flink-akka.actor.default-dispatcher-17] org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl:Activate slot ce4c5fd7e1a555d9b6f65b3af82d5d7a.
2023-01-05 20:19:50,478 - 4268 INFO [flink-akka.actor.default-dispatcher-17] org.apache.flink.runtime.taskexecutor.TaskExecutor:Received task Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0 (1ec46e7c2ffe16586025fdba02b44f3f), deploy into slot with allocation id ce4c5fd7e1a555d9b6f65b3af82d5d7a.
2023-01-05 20:19:50,486 - 4276 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.flink.runtime.taskmanager.Task:Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0 (1ec46e7c2ffe16586025fdba02b44f3f) switched from CREATED to DEPLOYING.
2023-01-05 20:19:50,489 - 4279 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.flink.runtime.taskmanager.Task:Loading JAR files for task Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0 (1ec46e7c2ffe16586025fdba02b44f3f) [DEPLOYING].
2023-01-05 20:19:50,502 - 4292 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.flink.runtime.taskmanager.Task:Registering task at network: Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0 (1ec46e7c2ffe16586025fdba02b44f3f) [DEPLOYING].
2023-01-05 20:19:50,539 - 4329 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.flink.runtime.state.StateBackendLoader:Using job/cluster config to configure application-defined state backend: File State Backend (checkpoints: 'hdfs://hacluster/flink112/flink-checkpoints', savepoints: 'hdfs://hacluster/flink112/flink-savepoints', asynchronous: TRUE, fileStateThreshold: 20480)
2023-01-05 20:19:50,539 - 4329 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.flink.runtime.state.StateBackendLoader:Using application-defined state backend: File State Backend (checkpoints: 'hdfs://hacluster/flink112/flink-checkpoints', savepoints: 'hdfs://hacluster/flink112/flink-savepoints', asynchronous: TRUE, fileStateThreshold: 20480)
2023-01-05 20:19:50,552 - 4342 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.flink.runtime.taskmanager.Task:Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0 (1ec46e7c2ffe16586025fdba02b44f3f) switched from DEPLOYING to RUNNING.
2023-01-05 20:19:50,722 - 4512 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.sink.DtOutputFormatSinkFunction:Start initialize output format state
2023-01-05 20:19:50,769 - 4559 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.sink.DtOutputFormatSinkFunction:Is restored:false
2023-01-05 20:19:50,769 - 4559 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.sink.DtOutputFormatSinkFunction:End initialize output format state
2023-01-05 20:19:50,769 - 4559 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat:timeZone = sun.util.calendar.ZoneInfo[id="Asia/Shanghai",offset=28800000,dstSavings=0,useDaylight=false,transitions=29,lastRule=null]
2023-01-05 20:19:51,235 - 5025 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat:tablePath:paste_source__sys_user, rowData:null, even:{schema=null, table=sys_user}
2023-01-05 20:19:51,419 - 5209 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.hive.jdbc.Utils:Supplied authorities: 192.168.7.101:10000
2023-01-05 20:19:51,420 - 5210 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.hive.jdbc.Utils:Resolved authority: 192.168.7.101:10000
2023-01-05 20:19:51,554 - 5344 ERROR [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] org.apache.hive.jdbc.HiveConnection:Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:176)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:163)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:590)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:172)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getHiveConnection(HiveDbUtil.java:257)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:211)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)
2023-01-05 20:19:51,557 - 5347 ERROR [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.util.RetryUtil$Retry:Exception when calling callable, 异常Msg:com.dtstack.chunjun.throwable.ChunJunRuntimeException: connection info :jdbc:hive2://192.168.7.101:10000/default error message :java.sql.SQLException: Could not establish connection to jdbc:hive2://192.168.7.101:10000/: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:601)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:172)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getHiveConnection(HiveDbUtil.java:257)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:211)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:176)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:163)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:590)
... 32 more

at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:222)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)

com.dtstack.chunjun.throwable.ChunJunRuntimeException: connection info :jdbc:hive2://192.168.7.101:10000/default error message :java.sql.SQLException: Could not establish connection to jdbc:hive2://192.168.7.101:10000/: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:601)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:172)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getHiveConnection(HiveDbUtil.java:257)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:211)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:176)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:163)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:590)
... 32 more

at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:222)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)

2023-01-05 20:19:51,558 - 5348 ERROR [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.connector.hive.util.HiveUtil: java.lang.RuntimeException: connect:jdbc:hive2://192.168.7.101:10000/default failed :java.lang.RuntimeException: com.dtstack.chunjun.throwable.ChunJunRuntimeException: connection info :jdbc:hive2://192.168.7.101:10000/default error message :java.sql.SQLException: Could not establish connection to jdbc:hive2://192.168.7.101:10000/: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:601)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:172)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getHiveConnection(HiveDbUtil.java:257)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:211)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:176)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:163)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:590)
... 32 more

at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:137)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)

Caused by: com.dtstack.chunjun.throwable.ChunJunRuntimeException: connection info :jdbc:hive2://192.168.7.101:10000/default error message :java.sql.SQLException: Could not establish connection to jdbc:hive2://192.168.7.101:10000/: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:601)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:172)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getHiveConnection(HiveDbUtil.java:257)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:211)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
at com.dtstack.chunjun.util.RetryUtil.executeWithRetry(RetryUtil.java:56)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:92)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset!
Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:176)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:163)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:590)
... 32 more

at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:222)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.connect(HiveDbUtil.java:203)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:96)
at com.dtstack.chunjun.connector.hive.util.HiveDbUtil$1.call(HiveDbUtil.java:93)
at com.dtstack.chunjun.util.RetryUtil$Retry.call(RetryUtil.java:141)
at com.dtstack.chunjun.util.RetryUtil$Retry.doRetry(RetryUtil.java:80)
... 21 more

	at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnectionWithRetry(HiveDbUtil.java:104)
	at com.dtstack.chunjun.connector.hive.util.HiveDbUtil.getConnection(HiveDbUtil.java:86)
	at com.dtstack.chunjun.connector.hive.util.HiveUtil.createHiveTableWithTableInfo(HiveUtil.java:90)
	at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.checkCreateTable(HiveOutputFormat.java:386)
	at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.primaryCreateTable(HiveOutputFormat.java:357)
	at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.openInternal(HiveOutputFormat.java:114)
	at com.dtstack.chunjun.sink.format.BaseRichOutputFormat.open(BaseRichOutputFormat.java:262)
	at com.dtstack.chunjun.connector.hive.sink.HiveOutputFormat.open(HiveOutputFormat.java:94)
	at com.dtstack.chunjun.sink.DtOutputFormatSinkFunction.open(DtOutputFormatSinkFunction.java:95)
	at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
	at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:433)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
	at java.lang.Thread.run(Thread.java:748)
2023-01-05 20:19:51,561 - 5351 INFO [Source: mysqlsourcefactory -> Sink: hivesinkfactory (1/1)#0] com.dtstack.chunjun.sink.format.BaseRichOutputFormat:taskNumber[0] close()
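For context (not a confirmed fix from the maintainers): `Required field 'client_protocol' is unset` from HiveServer2 is the classic symptom of the hive-jdbc client speaking a newer Thrift protocol version than the server understands, which fits a Hive 2.1.0 server being called by a job whose shaded jar bundles a newer driver. One approach to test that theory is to rebuild the hive connector with the driver pinned to the server's own release. The fragment below is a sketch only; the exact module the dependency lives in, and whether ChunJun's shading needs further exclusions, are assumptions about the build:

```xml
<!-- Hypothetical pom.xml fragment for the ChunJun hive connector module.
     Pins hive-jdbc to the server's release (2.1.0 here) so client and
     server agree on the Thrift protocol version. Coordinates are the
     standard Apache Hive ones; the module to edit is an assumption. -->
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>2.1.0</version>
</dependency>
```

If the shaded jar still pulls in a newer hive-jdbc transitively, `mvn dependency:tree` on the connector module shows where it comes from.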

psvmc commented 1 year ago

We are using Apache Hive 2.1.0.

psvmc commented 1 year ago

[root@hadoop01 ~]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/tools/bigdata/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/tools/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/data/tools/bigdata/apache-hive-2.1.0-bin/lib/hive-common-2.1.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive (default)>
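Worth noting: the `hive` CLI shown above talks to the metastore directly and never goes through HiveServer2's Thrift endpoint, so it can work even when JDBC clients cannot. A quicker way to isolate the server side is to connect with the beeline that ships in the same Hive install, so client and server protocol versions match by construction. A sketch, using the host/port from the job config and the install path visible in the log above:

```shell
# Beeline from the same Hive 2.1.0 install as the server, pointed at the
# HiveServer2 endpoint the ChunJun job uses. If this connects, the server
# is fine and the mismatch is in the driver bundled with the job.
/data/tools/bigdata/apache-hive-2.1.0-bin/bin/beeline \
  -u "jdbc:hive2://192.168.7.101:10000/default"
```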

psvmc commented 1 year ago

> Which version of Hive is it — Apache Hive 2.1, or Spark Thrift Server? Please provide the task script and logs, thanks.

The logs have already been provided above.

psvmc commented 1 year ago
{
  "job": {
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "column": [
              {
                "name": "ID",
                "type": "varchar"
              },
              {
                "name": "DIR_ID",
                "type": "varchar"
              },
              {
                "name": "UNAME",
                "type": "varchar"
              },
              {
                "name": "UPWD",
                "type": "varchar"
              },
              {
                "name": "RNAME",
                "type": "varchar"
              },
              {
                "name": "MOBILE",
                "type": "varchar"
              },
              {
                "name": "EMAIL",
                "type": "varchar"
              },
              {
                "name": "LOGIN_DENIED",
                "type": "int"
              },
              {
                "name": "STATUS",
                "type": "int"
              },
              {
                "name": "CREATED_BY",
                "type": "varchar"
              },
              {
                "name": "CREATED_TIME",
                "type": "datetime"
              },
              {
                "name": "UPDATED_BY",
                "type": "varchar"
              },
              {
                "name": "UPDATED_TIME",
                "type": "datetime"
              },
              {
                "name": "REVISION",
                "type": "int"
              }
            ],
            "username": "root",
            "password": "123456",
            "connection": [
              {
                "jdbcUrl": [
                  "jdbc:mysql://192.168.7.102/yxdp?useSSL=false"
                ],
                "table": [
                  "sys_user"
                ]
              }
            ]
          }
        },
        "writer": {
          "name": "hivewriter",
          "parameter": {
            "jdbcUrl": "jdbc:hive2://192.168.7.101:10000/default",
            "username": "",
            "password": "",
            "fileType": "text",
            "fieldDelimiter": ",",
            "writeMode": "overwrite",
            "compress": "",
            "charsetName": "UTF-8",
            "maxFileSize": 1073741824,
            "analyticalRules": "paste_source_${schema}_${table}",
            "tablesColumn": "{\"sys_user\":[{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"ID\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"DIR_ID\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"UNAME\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"UPWD\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"RNAME\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"MOBILE\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"EMAIL\"},{\"comment\":\"\",\"type\":\"int\",\"key\":\"LOGIN_DENIED\"},{\"comment\":\"\",\"type\":\"int\",\"key\":\"STATUS\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"CREATED_BY\"},{\"comment\":\"\",\"type\":\"datetime\",\"key\":\"CREATED_TIME\"},{\"comment\":\"\",\"type\":\"varchar\",\"key\":\"UPDATED_BY\"},{\"comment\":\"\",\"type\":\"datetime\",\"key\":\"UPDATED_TIME\"},{\"comment\":\"\",\"type\":\"int\",\"key\":\"REVISION\"}]}",
            "partition": "pt",
            "partitionType": "MINUTE",
            "defaultFS": "hdfs://hacluster",
            "hadoopConfig": {
              "dfs.ha.namenodes.ns": "nn1,nn2",
              "fs.defaultFS": "hdfs://hacluster",
              "dfs.namenode.rpc-address.hacluster.nn2": "192.168.7.102:9000",
              "dfs.client.failover.proxy.provider.ns": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
              "dfs.namenode.rpc-address.hacluster.nn1": "192.168.7.101:9000",
              "dfs.nameservices": "hacluster",
              "fs.hdfs.impl.disable.cache": "true",
              "hadoop.user.name": "root",
              "fs.hdfs.impl": "org.apache.hadoop.hdfs.DistributedFileSystem"
            }
          }
        }
      }
    ],
    "setting": {
      "speed": {
        "channel": 1,
        "bytes": 0
      }
    }
  }
}
715484127 commented 10 months ago

I'm running into the same problem. Has it been resolved?