Qihoo360 / Quicksql

A Flexible, Fast, Federated(3F) SQL Analysis Middleware for Multiple Data Sources
https://quicksql.readthedocs.io
MIT License

When I debug the examples, I get java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.JsonMappingException #270

Open xza-m opened 2 years ago

xza-m commented 2 years ago

Versions

0.7.0

Describe the bug

I set the env environment variables as described in the documentation, but when I run ./bin/quicksql-example.sh --class com.qihoo.qsql.CsvScanExample --runner spark, the following error is thrown.

To Reproduce

Expected behavior

Actual behavior

Full Output Logs

Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.JsonMappingException.&lt;init&gt;(Ljava/io/Closeable;Ljava/lang/String;)V
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:61)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:17)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:718)
    at org.apache.spark.rdd.RDDOperationScope$.&lt;init&gt;(RDDOperationScope.scala:82)
    at org.apache.spark.rdd.RDDOperationScope$.&lt;clinit&gt;(RDDOperationScope.scala)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:339)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3383)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2544)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2544)
    at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2544)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2758)
    at org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$.infer(CSVDataSource.scala:232)
    at org.apache.spark.sql.execution.datasources.csv.CSVDataSource.inferSchema(CSVDataSource.scala:68)
    at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:63)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$6.apply(DataSource.scala:179)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$6.apply(DataSource.scala:179)
    at scala.Option.orElse(Option.scala:289)
    at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:178)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:372)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:615)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:467)
    at Requirement25801.execute(Requirement25801.java:29)
    at com.qihoo.qsql.exec.spark.SparkPipeline.show(SparkPipeline.java:87)
    at com.qihoo.qsql.CsvScanExample.main(CsvScanExample.java:22)

Additional context

I checked and the Jackson versions are all at the default 2.6.5. What do I need to do?
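The missing method is the JsonMappingException(Closeable, String) constructor, which suggests the jackson-module-scala jar on the runtime classpath is newer than the jackson-databind jar it is calling into. One way to narrow this down (a minimal diagnostic sketch, not part of the original report; the class name JacksonVersionCheck is made up for illustration, and it assumes you run it with the same classpath that quicksql-example.sh builds) is to print which jar each Jackson class is actually loaded from:

import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.scala.DefaultScalaModule;

// Diagnostic sketch: prints the jar location and declared version of the
// Jackson classes involved in the NoSuchMethodError above, so mismatched
// versions on the classpath become visible.
public class JacksonVersionCheck {
    public static void main(String[] args) {
        printOrigin(ObjectMapper.class);
        printOrigin(JsonMappingException.class);
        printOrigin(DefaultScalaModule.class);
    }

    private static void printOrigin(Class<?> clazz) {
        Package pkg = clazz.getPackage();
        String version = (pkg != null && pkg.getImplementationVersion() != null)
                ? pkg.getImplementationVersion() : "unknown";
        System.out.println(clazz.getName()
                + " -> " + clazz.getProtectionDomain().getCodeSource().getLocation()
                + " (version " + version + ")");
    }
}

If the databind and module-scala jars report different versions, aligning them (or excluding the stray jar from the Spark/Quicksql lib directory) should remove the NoSuchMethodError.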