YannMoisan opened this issue 5 years ago
I guess the `resolutionHooks` should be passed around here, so that the exclusions can be added to all dependencies that need them via hooks (by calling `fetch.dependencies` / `fetch.withDependencies`). So that would need changes in ammonite-spark…
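For illustration, a minimal sketch of what such a hook could look like from the user side, using the `fetch.dependencies` / `fetch.withDependencies` calls mentioned above; the excluded module is just an example, and the exact `Dependency` accessors are assumptions about the coursier version in use:

```scala
import coursier.core.{ModuleName, Organization}

// Sketch only: add an exclusion on org.apache.hadoop:hadoop-client to every
// dependency carried by the Fetch instance the hook receives.
interp.resolutionHooks += { fetch =>
  fetch.withDependencies(
    fetch.dependencies.map { dep =>
      dep.withExclusions(
        dep.exclusions + ((Organization("org.apache.hadoop"), ModuleName("hadoop-client")))
      )
    }
  )
}
```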
IMO one part of the problem is that ammonite-spark uses the vanilla Spark version, and its `spark-yarn` is linked against Hadoop `2.6.5`.
So instead of relying on the transitive version of Hadoop, maybe we can let the user provide the Hadoop/YARN version when creating the Spark session. WDYT?
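To make that concrete, the user-facing API might look something like the following; `hadoopVersion` is purely hypothetical and does not exist in ammonite-spark today:

```scala
// Hypothetical sketch only: hadoopVersion is not a real ammonite-spark API.
val spark = AmmoniteSparkSession.builder()
  .master("yarn")
  .hadoopVersion("2.8.5") // would pin the hadoop/yarn artifacts to this version
  .getOrCreate()
```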
@darkjh That could be added, yeah.
Good news, it works!

I've built a custom version of almond against ammonite-spark with this PR: https://github.com/alexarchambault/ammonite-spark/pull/58.

In the notebook, I've just forced the version of `hadoop-client` before importing `spark-sql`:
```scala
import coursier.core._

// Force hadoop-client to 2.8.5 in every subsequent resolution.
interp.resolutionHooks += { fetch =>
  fetch.withResolutionParams(
    fetch.resolutionParams.addForceVersion(
      (Module(Organization("org.apache.hadoop"), ModuleName("hadoop-client"), Map.empty), "2.8.5")
    )
  )
}
```
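The subsequent import was presumably along these lines, given the Spark 2.4.3 / Scala 2.12 setup quoted below (the `hadoop-aws` artifact, needed for S3 access, also appears in the fuller example further down):

```scala
// With the hook in place, this resolves against the forced hadoop-client 2.8.5.
import $ivy.`org.apache.spark::spark-sql:2.4.3`
import $ivy.`org.apache.hadoop:hadoop-aws:2.8.5`
```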
And I'm now able to read a file from S3 with the following config: EMR 5.24.1 (Hadoop 2.8.5), Spark 2.4.3, Scala 2.12.
@YannMoisan would you be able to provide an example of connecting to YARN with almond? I've tried a number of things and can't seem to get it to work with jupyter/almond running on an EMR cluster.
@brayellison we are still using our custom version of almond, based on almond 0.6.3. We tried to bump it, without success, but we haven't spent much time investigating the error.
```scala
import coursier.core._

interp.resolutionHooks += { fetch =>
  fetch.withResolutionParams(
    fetch.resolutionParams.addForceVersion(
      (Module(Organization("org.apache.hadoop"), ModuleName("hadoop-client"), Map.empty), "2.8.5")
    )
  )
}

// This @ is necessary for Ammonite to process the `interp.` before continuing
// cf: https://ammonite.io/#Multi-stageScripts
@

import $ivy.`sh.almond::almond-spark:0.6.3-custom-SNAPSHOT`
import $ivy.`org.apache.hadoop:hadoop-aws:2.8.5`

NotebookSparkSession.builder()
  .master("yarn")
  .getOrCreate()
```
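For completeness, a short usage sketch with the resulting session, assuming it is bound to a value named `spark`; the bucket and key are placeholders, and `s3a://` assumes the `hadoop-aws` filesystem with credentials supplied by the EMR instance profile:

```scala
// Placeholder bucket/key; relies on hadoop-aws from the import above.
val lines = spark.read.textFile("s3a://my-bucket/path/to/file.txt")
lines.show()
```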
I missed this before and came back to it searching for a solution again. Thank you @YannMoisan, I'll give it a shot!
I'm trying to use almond with Hadoop 2.8.5 (the Hadoop version used by recent EMR releases) and I ran into an error due to incompatible versions of the Hadoop jars on the classpath.

It seems that `spark-yarn` has transitive dependencies on Hadoop 2.6.5.

The first idea is to use a profile, but unfortunately there is no hadoop-2.8 profile for Spark.

The second idea is to exclude the Hadoop jars; that works for the driver, but they are still downloaded to the executors.

The third idea is to have a way to exclude jars from the classpath built by `ammonite-spark`, but that doesn't seem possible yet.
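For reference, a minimal sketch of what the second idea looks like in a session, assuming an Ammonite version whose `interp.load.ivy` accepts `coursierapi.Dependency` values; the Spark/Scala versions match the setup quoted elsewhere in the thread, and the `"*"` wildcard exclusion is an assumption about coursier's exclusion support:

```scala
import coursierapi.Dependency

// Sketch of idea two: load spark-yarn while excluding all org.apache.hadoop
// artifacts. This keeps them off the driver classpath, but (as noted above)
// they still end up downloaded to the executors.
interp.load.ivy(
  Dependency.of("org.apache.spark", "spark-yarn_2.12", "2.4.3")
    .addExclusion("org.apache.hadoop", "*")
)
```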