samelamin / spark-bigquery

Google BigQuery support for Spark, Structured Streaming, SQL, and DataFrames with easy Databricks integration.
Apache License 2.0

Using spark-bigquery connector in AWS EMR Zeppelin #60

Closed Jeeva-Ganesan closed 6 years ago

Jeeva-Ganesan commented 6 years ago

Hi.

I am trying to use this connector on an AWS EMR cluster. I downloaded the jar file from here - https://mvnrepository.com/artifact/com.github.samelamin/spark-bigquery_2.11/0.2.4 - and placed it in the /usr/lib/spark/jars folder.

Then I tried to use it in a Zeppelin notebook with the Spark interpreter:

import com.samelamin.spark.bigquery._

// Set up GCP credentials
sqlContext.setGcpJsonKeyFile("/home/json/google_api_credentials.json")

// Set up BigQuery project and bucket
sqlContext.setBigQueryProjectId("data-1349")
//sqlContext.setBigQueryGcsBucket("<GCS_BUCKET>")

// Set up BigQuery dataset location, default is US
sqlContext.setBigQueryDatasetLocation("US")

This is the error I am getting; can you please help with it?

java.lang.NoClassDefFoundError: com/google/api/client/http/HttpRequestInitializer
  at com.samelamin.spark.bigquery.BigQuerySQLContext.bq$lzycompute(BigQuerySQLContext.scala:19)
  at com.samelamin.spark.bigquery.BigQuerySQLContext.bq(BigQuerySQLContext.scala:19)
  at com.samelamin.spark.bigquery.BigQuerySQLContext.setBigQueryDatasetLocation(BigQuerySQLContext.scala:69)
  ... 59 elided
Caused by: java.lang.ClassNotFoundException: com.google.api.client.http.HttpRequestInitializer
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  ... 62 more
samelamin commented 6 years ago

Hi

Looks like you need to create an uber jar, because the connector on its own needs the Google API client. Most clusters already have it, but it sounds like you need to load it into Zeppelin specifically.

I would suggest creating a fat jar and seeing if you can get Spark to run a sample application on EMR; once that works, it should be simpler to port it to Zeppelin.
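For reference, a minimal sbt-assembly setup for such a fat jar might look like the sketch below. This is an assumption about how to wire it up, not from the thread: the plugin version is illustrative, Spark is marked `provided` since EMR supplies it on the cluster classpath, and only the connector and the Google API client get bundled.

```scala
// project/plugins.sbt (sbt-assembly version is an assumption):
//   addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")

// build.sbt -- Spark itself is "provided" (EMR already ships it),
// so only the connector and its Google API client dependency are
// packed into the assembled jar.
libraryDependencies ++= Seq(
  "org.apache.spark"      %% "spark-core"        % "2.2.1" % "provided",
  "org.apache.spark"      %% "spark-sql"         % "2.2.1" % "provided",
  "com.github.samelamin"  %% "spark-bigquery"    % "0.2.4",
  "com.google.api-client"  % "google-api-client" % "1.23.0"
)
```

The resulting jar from `sbt assembly` can then be passed to spark-submit on the EMR master for the sample run.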

Jeeva-Ganesan commented 6 years ago

Hi. I created an uber jar with the Google API client library using sbt:

libraryDependencies ++= {
  val sparkVer = "2.2.1"
  val sparkbqVer = "0.2.4"
  Seq(
    "org.apache.spark" %% "spark-core" % sparkVer % "compile" withSources(),
    "org.apache.spark" %% "spark-sql" % sparkVer % "provided", //% "compile" withSources(),
    "org.apache.spark" %% "spark-hive" % sparkVer, //% "provided" withSources(),
    "com.github.samelamin" %% "spark-bigquery" % sparkbqVer,
    "com.google.api-client" % "google-api-client" % "1.23.0"
  )
}

This is the error I am getting when I submit my Spark job with the spark-submit command:

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
        at com.google.cloud.hadoop.io.bigquery.BigQueryStrings.parseTableReference(BigQueryStrings.java:68)
        at com.samelamin.spark.bigquery.BigQueryRelation.getConvertedSchema(BigQueryRelation.scala:19)
        at com.samelamin.spark.bigquery.BigQueryRelation.schema(BigQueryRelation.scala:13)
        at org.apache.spark.sql.execution.datasources.LogicalRelation$.apply(LogicalRelation.scala:77)
        at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:424)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:172)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
        at dataload.pull_gbq_data$.main(pull_gbq_data.scala:18)
        at dataload.pull_gbq_data.main(pull_gbq_data.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
samelamin commented 6 years ago

That is a different error, @Jeeva-Ganesan, and it has to do with Guava.

You need to shade it into the uber jar; if you google it you will see what I mean.

I think anything above Guava 18 should fix it.

Jeeva-Ganesan commented 6 years ago

Yes, I tried that. I changed my build file like this:

assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.guava.**" -> "my_conf.@1")
    .inLibrary("com.google.guava" % "config" % "23.6")
    .inProject
)

libraryDependencies ++= {
  val sparkVer = "2.2.1"
  val sparkbqVer = "0.2.4"
  Seq(
    "org.apache.spark" %% "spark-core" % sparkVer % "provided", //compile" withSources(),
    "org.apache.spark" %% "spark-sql" % sparkVer % "provided", //% "compile" withSources(),
    "org.apache.spark" %% "spark-hive" % sparkVer, //% "provided" withSources(),
    "com.github.samelamin" %% "spark-bigquery" % sparkbqVer % "compile",
    "com.google.api-client" % "google-api-client" % "1.23.0" % "compile",
    "com.google.guava" % "guava" % "23.6"
  )
}

I still got the error, so I ended up downloading the latest Guava jar and placing it in the Spark jars folder (deleting the existing one). Then it worked.
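A note for anyone hitting this later: the shade rule above likely never matched anything, because Guava's classes live under the `com.google.common` package (`com.google.guava` is only the Maven group ID, and the artifact is `guava`, not `config`). A corrected sbt-assembly rule might look like the following sketch; the `shaded.` prefix is an arbitrary choice, and this is untested against this exact setup.

```scala
// build.sbt -- rename Guava's actual package (com.google.common) inside
// the fat jar, so the connector's newer Guava cannot clash with the
// older Guava already on the EMR/Spark classpath.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "shaded.com.google.common.@1")
    .inAll
)
```

With the rename applied to all jars via `.inAll`, both the connector's bytecode references and Guava's own classes are rewritten consistently, which avoids replacing jars under /usr/lib/spark/jars by hand.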