apache / seatunnel

SeaTunnel is a next-generation super high-performance, distributed, massive data integration tool.
https://seatunnel.apache.org/
Apache License 2.0

There is a jar conflict; do I need to repackage waterdrop-core-2.0.4-2.11.8.jar? #610

Closed ruizhang81 closed 1 year ago

ruizhang81 commented 3 years ago

Waterdrop Version (2.0.4-2.11.8)

Flink or Spark Version (SPARK2-2.2.0)

Java or Scala Version (Java 1.8)

Waterdrop Config (Waterdrop configuration file)

Please delete sensitive information

env {
  spark.app.name = "hive-ck"
  spark.executor.instances = 8
  spark.executor.cores = 2
  spark.executor.memory = "2g"
}

source {
  Fake {
    result_table_name = "phone_idcard"
  }
}

transform {

}

sink {
  clickhouse {
    host = "172.17.6.48:8123"
    database = "mingzhi_808"
    clickhouse.socket_timeout = 600000
    table = "phone_idcard"
    username = "test"
    password = "123"
    bulk_size = 50000
    retry = 3
  }
}

Running Command(启动命令)

./bin/start-waterdrop-spark.sh --master yarn --deploy-mode client -c ./config/test.conf

Error Exception

Exception in thread "main" java.lang.NoSuchMethodError: org.codehaus.commons.compiler.Location.<init>(Ljava/lang/String;III)V
	at org.codehaus.janino.Scanner.location(Scanner.java:233)
	at org.codehaus.janino.Parser.location(Parser.java:3135)
	at org.codehaus.janino.Parser.parseImportDeclarationBody(Parser.java:275)
	at org.codehaus.janino.ClassBodyEvaluator.makeCompilationUnit(ClassBodyEvaluator.java:258)
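A NoSuchMethodError like this usually means two different versions of the same library (here Janino's commons-compiler) ended up on the classpath. As a diagnostic sketch (not part of this thread), the script below scans jar directories for the class named in the error; the directory paths are placeholders that you would point at your Waterdrop lib directory and the cluster's Spark jars directory:

```python
# Sketch: find every jar that bundles a given class, to spot duplicate
# library versions behind a NoSuchMethodError. Paths are examples only.
import glob
import os
import zipfile

def find_jars_with_class(jar_dirs, class_entry):
    """Return every jar under jar_dirs whose archive contains class_entry."""
    hits = []
    for d in jar_dirs:
        for jar in sorted(glob.glob(os.path.join(d, "*.jar"))):
            try:
                with zipfile.ZipFile(jar) as zf:
                    if class_entry in zf.namelist():
                        hits.append(jar)
            except zipfile.BadZipFile:
                # Skip corrupt or non-zip files on the path.
                continue
    return hits

if __name__ == "__main__":
    # More than one hit usually means two versions are colliding.
    for jar in find_jars_with_class(
        ["./lib", "/opt/spark/jars"],  # example directories, adjust as needed
        "org/codehaus/commons/compiler/Location.class",
    ):
        print(jar)
```

If the class shows up in more than one jar, that is the conflict: either exclude one copy or shade (relocate) the dependency when repackaging.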

Screenshots

RickyHuo commented 3 years ago

@ruizhang81 Please provide the details according to the issue template; see https://github.com/InterestingLab/waterdrop/issues/new?assignees=&labels=&template=----------issue.md&title=

ruizhang81 commented 3 years ago

Updated.

RickyHuo commented 3 years ago

@ruizhang81 Try version 1.5.1.

ruizhang81 commented 3 years ago

Now there is a different error:

21/02/07 15:33:47 INFO batch.Clickhouse: insert into phone_idcard (idcard,phone,score) values (?,?,?)
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hive.common.util.HiveStringUtils.joinIgnoringEmpty([Ljava/lang/String;C)Ljava/lang/String;
	at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:99)
	at org.apache.spark.sql.hive.HiveShim$.appendReadColumns(HiveShim.scala:76)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.addColumnMetadataToConf(HiveTableScanExec.scala:119)
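Both stack traces point at the same root cause: duplicate library versions on the classpath (first Janino's commons-compiler, now Hive's common utilities), so the usual remedies are excluding or shading the conflicting jar when repackaging. One hedged workaround, not suggested in this thread, is Spark's experimental user-classpath-first settings, which Waterdrop's env block can pass through to spark-submit:

```
env {
  spark.app.name = "hive-ck"
  # Experimental Spark options (assumption, not from this thread): prefer
  # the jars shipped with the job over the cluster-provided copies.
  # Test carefully; this can break other classes that rely on cluster jars.
  spark.driver.userClassPathFirst = true
  spark.executor.userClassPathFirst = true
}
```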