almond-sh / almond

A Scala kernel for Jupyter
https://almond.sh
BSD 3-Clause "New" or "Revised" License

How to use almond with Hadoop 2.8 #397

Open YannMoisan opened 5 years ago

YannMoisan commented 5 years ago

I'm trying to use almond with Hadoop 2.8.5 (the Hadoop version used by recent EMR releases) and I ran into an error caused by incompatible versions of the Hadoop jars on the classpath.

19/07/23 16:09:47 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, ip-10-20-101-239.eu-west-1.compute.internal, executor 2): java.lang.IllegalAccessError: tried to access method org.apache.hadoop.metrics2.lib.MutableCounterLong.<init>(Lorg/apache/hadoop/metrics2/MetricsInfo;J)V from class org.apache.hadoop.fs.s3a.S3AInstrumentation
    at org.apache.hadoop.fs.s3a.S3AInstrumentation.streamCounter(S3AInstrumentation.java:195)

It seems that spark-yarn has transitive dependencies on Hadoop 2.6.5.

The first idea is to use a Hadoop profile, but unfortunately there is no hadoop-2.8 profile for Spark.

The second idea is to exclude the Hadoop jars. It works for the driver, but they are still downloaded to the executors.

interp.load.ivy(
  coursier.Dependency(
    module = coursier.Module(coursier.Organization("org.apache.spark"), coursier.ModuleName("spark-yarn_2.11")),
    version = "2.4.3",
    exclusions = Set((coursier.Organization("org.apache.hadoop"), coursier.ModuleName("*")))
  )
)
import $ivy.`sh.almond::almond-spark:0.6.0`
import $ivy.`org.apache.hadoop:hadoop-aws:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-hdfs-client:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-hdfs:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-yarn-api:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-yarn-client:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-mapreduce-client-core:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-yarn-server-web-proxy:2.8.5`
import $ivy.`org.apache.hadoop:hadoop-yarn-common:2.8.5`

The third idea is to have a way to exclude jars from the classpath built by ammonite-spark, but that doesn't seem possible yet.

alexarchambault commented 5 years ago

I guess the resolutionHooks should be passed around here, so that the exclusions can be added via hooks to all the dependencies that need them (by calling fetch.dependencies / fetch.withDependencies). So that needs changes in ammonite-spark…
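
Roughly, such a hook might look like the sketch below. This is only illustrative: it assumes the hook actually gets applied to the spark-yarn fetch done by ammonite-spark (which is the missing piece today), and that the fetch passed to the hook exposes withDependencies and that the coursier Dependency has withExclusions:

import coursier.core.{ModuleName, Organization}
interp.resolutionHooks += { fetch =>
  // Blanket-exclude org.apache.hadoop from every dependency being fetched,
  // so the executors would stop pulling the transitive Hadoop 2.6.5 jars.
  fetch.withDependencies(
    fetch.dependencies.map(
      _.withExclusions(Set((Organization("org.apache.hadoop"), ModuleName("*"))))
    )
  )
}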

darkjh commented 5 years ago

IMO one part of the problem is that ammonite-spark uses the vanilla Spark artifacts, and their spark-yarn is linked against hadoop 2.6.5. So instead of relying on the transitive Hadoop version, maybe we can let the user provide the Hadoop/YARN version when creating the Spark session. WDYT?
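
Sketching what that suggestion could look like (the hadoopVersion setter below is purely hypothetical and does not exist in ammonite-spark; it only illustrates the idea):

// Hypothetical API sketch, not part of ammonite-spark today: the user pins
// the Hadoop/YARN version up front instead of inheriting spark-yarn's
// transitive hadoop 2.6.5.
NotebookSparkSession.builder()
  .master("yarn")
  .hadoopVersion("2.8.5") // hypothetical setter
  .getOrCreate()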

alexarchambault commented 5 years ago

@darkjh That could be added, yeah.

YannMoisan commented 5 years ago

Good news, it works! I've built a custom version of almond against ammonite-spark with this PR: https://github.com/alexarchambault/ammonite-spark/pull/58. In the notebook, I just forced the version of hadoop-client before importing spark-sql:

import coursier.core._
interp.resolutionHooks += { fetch =>
  fetch.withResolutionParams(
    fetch.resolutionParams.addForceVersion(
      (Module(Organization("org.apache.hadoop"), ModuleName("hadoop-client"), Map.empty), "2.8.5")
    )
  )
}

And I'm now able to read a file from S3 with the following configuration: EMR 5.24.1 (Hadoop 2.8.5), Spark 2.4.3, Scala 2.12.
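
As a minimal check, an s3a read along these lines now works (assuming a spark session created via NotebookSparkSession; the bucket and path are placeholders):

// Placeholder bucket/path; this is the kind of s3a read that previously
// failed with the IllegalAccessError before forcing hadoop-client 2.8.5.
val lines = spark.read.textFile("s3a://my-bucket/some/file.txt")
lines.take(5).foreach(println)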

brayellison commented 4 years ago

@YannMoisan would you be able to provide an example of connecting to YARN with almond? I've tried a number of things and can't seem to get it to work with Jupyter/almond running on an EMR cluster.

YannMoisan commented 4 years ago

@brayellison we are still using our custom version of almond, based on almond 0.6.3. We tried to bump the version without success, but we haven't spent much time investigating the error.

import coursier.core._
interp.resolutionHooks += { fetch =>
  fetch.withResolutionParams(
    fetch.resolutionParams.addForceVersion(
      (Module(Organization("org.apache.hadoop"), ModuleName("hadoop-client"), Map.empty), "2.8.5")
    )
  )
}

// This @ is necessary for Ammonite to process the `interp.` before continuing
// cf: https://ammonite.io/#Multi-stageScripts
@

import $ivy.`sh.almond::almond-spark:0.6.3-custom-SNAPSHOT`
import $ivy.`org.apache.hadoop:hadoop-aws:2.8.5`

NotebookSparkSession.builder()
  .master("yarn")
  .getOrCreate()

brayellison commented 4 years ago

I missed this before and came back to it searching for a solution again. Thank you @YannMoisan, I'll give it a shot!