jeff-zou / flink-connector-redis

Asynchronous Flink connector based on Lettuce, supporting SQL join and sink, query caching, and debugging.
Apache License 2.0

Error: java.lang.ClassNotFoundException: io.netty.channel.ChannelHandler class not found #53

Closed WhiteLie98 closed 9 months ago

WhiteLie98 commented 9 months ago

Hello! I'm new to Flink SQL and needed the Redis GET functionality right away, which is how I found your project. Thank you very much!

When I run the Flink SQL job, it fails at startup with:

Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a connector using option: 'connector'='redis'
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'redis' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath. Available factory identifiers are: blackhole datagen filesystem jdbc kafka mysql-cdc paimon print starrocks upsert-kafka

I'm running Flink 1.15.3 with flink-connector-redis-1.3.1-jar-with-dependencies.jar (built by downloading the source for that version and packaging it myself). I mainly want to GET key-value pairs, so I only need it as a source.
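
For reference, a "Could not find any factory for identifier" error means the connector jar was not visible on the classpath when the job was submitted: the jar-with-dependencies has to sit in $FLINK_HOME/lib before the cluster starts, or the connector can be bundled into the job's uber jar instead. A minimal sketch of the Maven route, where the coordinates are my assumption and should be verified against the project README:

```xml
<!-- Assumed coordinates and version; verify against the project README -->
<dependency>
    <groupId>io.github.jeff-zou</groupId>
    <artifactId>flink-connector-redis</artifactId>
    <version>1.3.1</version>
</dependency>
```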

Could you please take a look when you have time? Thanks!

jeff-zou commented 9 months ago

Source is not supported yet, only left join (lookup). I'll add source support in the next few days.

WhiteLie98 commented 8 months ago

Do you mean it's currently not possible to simply GET data from Redis and write it to another database?

One more question: when I use Redis to look up data and join it against Kafka records, I get a java.lang.ClassNotFoundException saying the class org.apache.flink.calcite.shaded.com.google.common.cache.Cache cannot be found. This happens whether I use your 1.3.1 release jar directly or a dependency jar I built from source with the tests stripped out...

The lib folder of my current Flink environment: (screenshot)

The redacted code is as follows:

```sql
CREATE DATABASE IF NOT EXISTS test_source;
CREATE DATABASE IF NOT EXISTS test_sink;

CREATE TABLE test_source.test_redis (
  redis_key string,
  redis_value string
) WITH (
  'connector' = 'redis',
  'password' = 'XXXXX',
  'host' = 'XXXXX',
  'port' = 'XXXXX',
  'redis-mode' = 'single',
  'command'='get',
  'maxIdle'='2',
  'minIdle'='1',
  'lookup.cache.max-rows'='10',
  'lookup.cache.ttl'='10',
  'lookup.max-retries'='3'
);

CREATE TABLE test_source.test_kafka (
  kafka_key varchar(50),
  kafka_value varchar(50),
  pt AS PROCTIME()
) WITH (
  'connector' = 'kafka',
  'topic' = 'topicXXXX',
  'properties.bootstrap.servers' = 'XXXXX:XXXXX',
  'properties.group.id' = 'XXXX',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json',
  'json.ignore-parse-errors'='true',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.sasl.mechanism' = 'SCRAM-SHA-256',
  'properties.sasl.jaas.config' = 'org.apache.flink.kafka.shaded.org.apache.kafka.common.security.scram.ScramLoginModule required username="XXXX" password="XXXX";'
  );

CREATE TABLE IF NOT EXISTS test_sink.test_sr_kafka2redis (
  sr_key varchar(50),
  sr_value varchar(50),
  PRIMARY KEY (sr_key) NOT ENFORCED
) WITH (
  'connector' = 'starrocks',
  'jdbc-url'='jdbc:mysql://XXXXX:XXXXX?characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai',
  'load-url'='XXXXX:XXXXX',
  'sink.properties.strip_outer_array' = 'true',
  'sink.properties.format' = 'json',
  'database-name' = 'XXXXX',
  'username' = 'XXXXX',
  'password' = 'XXXXX',
  'table-name' = 'test_sr_kafka2redis',
  'sink.buffer-flush.interval-ms' = '60000',
  'sink.parallelism' = '1' 
);

INSERT INTO test_sink.test_sr_kafka2redis
select
  k.kafka_key as sr_key, 
  k.kafka_value as sr_value
from test_source.test_kafka k
left join test_source.test_redis FOR SYSTEM_TIME AS OF k.pt r 
on r.redis_key = concat('XXXXX::', k.kafka_key);
```

jeff-zou commented 8 months ago

Your project is missing this dependency:
```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
```
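
For context: org.apache.flink.calcite.shaded.com.google.common.cache.Cache is the Guava Cache class relocated inside the flink-table-planner jar (the planner shades Calcite together with Calcite's Guava dependency), which is why the connector's lookup cache cannot load it while the planner jar is missing from the classpath.
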
WhiteLie98 commented 8 months ago

> Your project is missing this dependency: `<dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner_${scala.binary.version}</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>`

Thank you! That did fix the earlier error, but flink-table-planner-loader was already in my lib folder, and the two jars can't coexist there...
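
That conflict is expected: since Flink 1.15 the distribution ships lib/flink-table-planner-loader, which cannot coexist with flink-table-planner on the same classpath. The workaround described in the Flink documentation is to remove flink-table-planner-loader from lib/ and copy the flink-table-planner_2.12 jar from opt/ into lib/ in its place.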

However, another class-not-found problem has now appeared, as shown in the screenshot (the java.lang.ClassNotFoundException: io.netty.channel.ChannelHandler from the issue title).

I added the netty-transport-native-epoll jar, but I can see other netty jars being pulled in during the build. Do they all need to go into the lib directory? (screenshot)
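
One detail worth noting: io.netty.channel.ChannelHandler lives in io.netty:netty-transport, not in the native-epoll artifact, so netty-transport-native-epoll alone will not resolve this error. Lettuce also needs at least netty-common and netty-handler. A hedged sketch of the direct dependencies, with versions as placeholders that should be aligned with the netty version the Lettuce release was built against:

```xml
<!-- Placeholder versions; match them to the netty version Lettuce expects -->
<dependency>
    <groupId>io.netty</groupId>
    <!-- netty-transport is the artifact that contains io.netty.channel.ChannelHandler -->
    <artifactId>netty-transport</artifactId>
    <version>4.1.100.Final</version>
</dependency>
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-handler</artifactId>
    <version>4.1.100.Final</version>
</dependency>
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-common</artifactId>
    <version>4.1.100.Final</version>
</dependency>
```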