cerndb / dist-keras

Distributed Deep Learning, with a focus on distributed training, using Keras and Apache Spark.
http://joerihermans.com/work/distributed-keras/
GNU General Public License v3.0

/examples/workflow.ipynb throws "NameError: name 'SparkSession' is not defined" for spark 2.0+ #73

Closed reedv closed 6 years ago

reedv commented 6 years ago

System:

- OS: CentOS 7
- Spark version: 2.1.0
- py4j version: py4j-0.10.4
- installed via `pip install -e .`, see https://github.com/cerndb/dist-keras#git--pip

When trying to run the example workflow, the section of the notebook that looks like

conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_cores`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.locality.wait", "0")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");

# Check if the user is running Spark 2.0 +
if using_spark_2:
    sc = SparkSession.builder.config(conf=conf) \
            .appName(application_name) \
            .getOrCreate()
else:
    # Create the Spark context.
    sc = SparkContext(conf=conf)
    # Add the missing imports
    from pyspark import SQLContext
    sqlContext = SQLContext(sc)

throws the following error:

NameError: name 'SparkSession' is not defined
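
For what it's worth, SparkSession appears to live in pyspark.sql in Spark 2.x, and the notebook's import cell does not pull it in, so my guess is the Spark-2 branch would need something along these lines (a sketch only, untested here):

    # Sketch: the same Spark-2 branch, with the import that seems to be missing.
    # `conf`, `application_name` and `using_spark_2` are defined earlier in the notebook.
    from pyspark.sql import SparkSession

    if using_spark_2:
        # Note: the builder returns a SparkSession (not a SparkContext),
        # even though the notebook binds the result to a variable named `sc`.
        sc = SparkSession.builder.config(conf=conf) \
                .appName(application_name) \
                .getOrCreate()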

reedv commented 6 years ago

The same error occurs even when defining a SparkContext beforehand (as recommended here: https://stackoverflow.com/a/49832046/8236733).

The relevant code snippet is:

    conf = SparkConf()
    conf.set("spark.app.name", application_name)
    conf.set("spark.master", master)
    conf.set("spark.executor.cores", `num_cores`)
    conf.set("spark.executor.instances", `num_executors`)
    conf.set("spark.locality.wait", "0")
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");

    # Check if the user is running Spark 2.0 +
    if using_spark_2:
    #     sc = SparkSession.builder.config(conf=conf) \
    #             .appName(application_name) \
    #             .getOrCreate()
    #     print sc.version
          sc = SparkContext(conf=conf)
          print sc.version
          ss = SparkSession(sc).appName(application_name).getOrCreate()

which generates the output

    2.1.0-mapr-1710

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-8-5e35501cdf3c> in <module>()
         15       sc = SparkContext(conf=conf)
         16       print sc.version
    ---> 17       ss = SparkSession(sc).appName(application_name).getOrCreate()
         18       print ss.version
         19 else:

    NameError: name 'SparkSession' is not defined
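
As a side note, I suspect this particular attempt would fail for a second reason even with the import in place: appName() seems to be a method of SparkSession.builder, not of a SparkSession instance, so SparkSession(sc).appName(...) would presumably raise an AttributeError. Wrapping an already-created SparkContext would, as far as I can tell, look more like this sketch:

    # Sketch, assuming an existing SparkContext `sc` and the pyspark.sql import:
    from pyspark.sql import SparkSession

    ss = SparkSession(sc)          # wrap the existing context directly
    # or let the builder reuse the already-running context:
    ss = SparkSession.builder \
            .appName(application_name) \
            .getOrCreate()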

I have never used pyspark before and am quite confused, since other docs/articles I have seen seem to indicate that initializing a SparkContext is not needed in order to use SparkSession in Spark 2 (e.g. here https://www.cloudera.com/documentation/data-science-workbench/latest/topics/cdsw_pyspark.html#pyspark_setup__local_mode, here https://databricks.com/blog/2016/08/15/how-to-use-sparksession-in-apache-spark-2-0.html, or here https://sparkour.urizone.net/recipes/understanding-sparksession/#toc), but that approach did not work either (as can be seen in the commented-out code above). Note that my environment variables look like:

    import os
    print os.environ['SPARK_HOME']
    print os.environ['PYTHONPATH']
    # since I'm using MapR hadoop and require a security ticket
    os.environ['MAPR_TICKETFILE_LOCATION'] = "/tmp/maprticket_10003"

    #output
    /opt/mapr/spark/spark-2.1.0
    /opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/:/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:

What does seem to work is...

    conf = SparkConf()
    conf.set("spark.app.name", application_name)
    conf.set("spark.master", master)
    conf.set("spark.executor.cores", `num_cores`)
    conf.set("spark.executor.instances", `num_executors`)
    conf.set("spark.locality.wait", "0")
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");

    # Check if the user is running Spark 2.0 +
    if using_spark_2:
         from pyspark.sql import SparkSession
         sc = SparkSession.builder.config(conf=conf) \
                 .appName(application_name) \
                 .getOrCreate()
         print sc.version

whereas the pyspark-related imports were just:

from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.mllib.evaluation import BinaryClassificationMetrics

Again, I have never used Spark before, so the fact that the original code does not import the package that seems to make things work here makes me question whether I am using the right kind of SparkSession. Is using the pyspark.sql package the right thing to do, or should SparkSession already be available after creating a SparkContext (without having to import anything other than the original imports)?
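
My current working assumption (happy to be corrected) is that in Spark 2.x the pyspark.sql import is indeed the intended way to get SparkSession, that the builder returns a SparkSession rather than a SparkContext, and that the underlying context is still reachable from the session if other code needs it, roughly like this sketch:

    # Sketch of my current understanding of the Spark 2.x entry point:
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.config(conf=conf) \
            .appName(application_name) \
            .getOrCreate()
    print spark.version          # e.g. 2.1.0-mapr-1710
    sc = spark.sparkContext      # the SparkContext wrapped by the session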

Any explanation as to what is going on here would be appreciated.

** The full original code that I am trying to get to work can be found here: https://github.com/cerndb/dist-keras/blob/master/examples/workflow.ipynb