databricks / spark-deep-learning

Deep Learning Pipelines for Apache Spark
https://databricks.github.io/spark-deep-learning
Apache License 2.0

NoClassDefFoundError: org/apache/spark/ml/util/MLWritable #194

Open ankamv opened 5 years ago

ankamv commented 5 years ago

I'm using Spark 2.4.2 with Anaconda Python 3.6.5 and I'm getting the error below. What's the best way to resolve this?

Command: pyspark --master local[*] --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11

from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline
from sparkdl import DeepImageFeaturizer
/mnt/conda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.

featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features", modelName="InceptionV3")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/mnt/tmp/spark-3a6fe30a-fc8a-4ece-accc-80033a821db0/userFiles-c4b15186-fd69-41a7-8e50-430045afaeb1/databricks_spark-deep-learning-1.5.0-spark2.4-s_2.11.jar/sparkdl/param/shared_params.py", line 50, in keyword_only
  File "/mnt/tmp/spark-3a6fe30a-fc8a-4ece-accc-80033a821db0/userFiles-c4b15186-fd69-41a7-8e50-430045afaeb1/databricks_spark-deep-learning-1.5.0-spark2.4-s_2.11.jar/sparkdl/transformers/named_image.py", line 196, in __init__
  File "/mnt/spark/python/pyspark/ml/wrapper.py", line 67, in _new_java_obj
    return java_obj(*java_args)
  File "/mnt/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
  File "/mnt/spark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/mnt/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.com.databricks.sparkdl.DeepImageFeaturizer.
: java.lang.NoClassDefFoundError: org/apache/spark/ml/util/MLWritable$class
        at com.databricks.sparkdl.DeepImageFeaturizer.<init>(DeepImageFeaturizer.scala:35)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.ml.util.MLWritable$class
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 12 more
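
For anyone landing here: the trailing $class in MLWritable$class is a Scala 2.11 trait-implementation artifact; Scala 2.12 compiles traits to Java 8 default methods and emits no such class. So a jar built for Scala 2.11 (the s_2.11 suffix above) throws exactly this error on a Spark built with Scala 2.12, and the prebuilt Spark 2.4.2 binaries were, unusually for the 2.4 line, compiled with Scala 2.12. A minimal check, assuming a running PySpark shell where spark is defined (the _jvm gateway access is a py4j internal, not a public API):

# Ask the driver JVM which Scala version it was built with.
scala_version = spark.sparkContext._jvm.scala.util.Properties.versionNumberString()
print(scala_version)  # e.g. "2.11.12" -> use s_2.11 packages; "2.12.x" -> s_2.12

# The --packages coordinate must carry a matching suffix:
pkg = "databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11"
if not scala_version.startswith(pkg.rsplit("s_", 1)[-1]):
    print("Scala mismatch: expect NoClassDefFoundError for *$class names")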

zhujiesheng commented 5 years ago

I also have this problem. Have you solved it?

ankamv commented 5 years ago

Nope. I'm not sure what versions are compatible.

javadba commented 5 years ago

The docs say Spark 2.3.0. This repo seems to be somewhat poorly maintained, so I'd suggest starting from that version, and then seeing if you can make a patch/PR that adds support at the Spark 2.4.2 level.

razorkoo commented 5 years ago

I guess the issue is the Scala version. I'm not sure which Scala version you are using, but I had the same issue with Scala 2.12. I switched my Scala to version 2.11.12 and it works now.
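
That matches the failure mode: spark-deep-learning 1.5.0 is published only for Scala 2.11 (the s_2.11 suffix), so Spark itself must be a Scala 2.11 build. A sketch of a combination that should work, assuming the default (Scala 2.11) binaries of a 2.4.x release other than 2.4.2:

# Launched as: pyspark --master local[*] \
#     --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11
# With a Scala 2.11 Spark 2.4 build, the constructor should no longer
# trigger the NoClassDefFoundError:
from sparkdl import DeepImageFeaturizer

featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                 modelName="InceptionV3")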

Abhishek-P commented 4 years ago

I am seeing this issue too, although not with Databricks-based PySpark; rather with plain PySpark and spark-nlp, on Windows. I have spark-nlp 2.5.5 and Spark 2.4.6.

I am trying out the context-aware spell checker described in https://medium.com/spark-nlp/applying-context-aware-spell-checking-in-spark-nlp-3c29c46963bc

The first component in the pipeline is a DocumentAssembler:

from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp

spark = sparknlp.start()
documentAssembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

When run, the above code fails as below:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\pyspark\__init__.py", line 110, in wrapper
    return func(self, **kwargs)
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\sparknlp\base.py", line 148, in __init__
    super(DocumentAssembler, self).__init__(classname="com.johnsnowlabs.nlp.DocumentAssembler")
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\pyspark\__init__.py", line 110, in wrapper
    return func(self, **kwargs)
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\sparknlp\internal.py", line 72, in __init__
    self._java_obj = self._new_java_obj(classname, self.uid)
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\pyspark\ml\wrapper.py", line 69, in _new_java_obj
    return java_obj(*java_args)
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\pyspark\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py", line 1569, in __call__
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\pyspark\sql\utils.py", line 131, in deco
    return f(*a, **kw)
  File "C:\Users\pab\AppData\Local\Continuum\anaconda3.7\envs\MailChecker\lib\site-packages\pyspark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.com.johnsnowlabs.nlp.DocumentAssembler.
: java.lang.NoClassDefFoundError: org/apache/spark/ml/util/MLWritable$class
        at com.johnsnowlabs.nlp.DocumentAssembler.<init>(DocumentAssembler.scala:16)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)

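One hedged observation on the trace above: the py4j-0.10.9 paths belong to PySpark 3.x (Spark 2.4.x ships py4j 0.10.7, as in the original report), and spark-nlp 2.5.5's default artifact targets Scala 2.11 / Spark 2.4, which would reproduce exactly this MLWritable$class error. A quick sanity check on which PySpark the interpreter actually imports:

# Confirm the PySpark version and install location in use; a stray Spark 3.x
# install alongside 2.4.6 would explain the py4j-0.10.9 paths in the trace.
import pyspark

print(pyspark.__version__)  # spark-nlp 2.5.5 expects a 2.4.x here
print(pyspark.__file__)     # which site-packages copy was imported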

aishwarya-agrawal commented 3 years ago

I am getting a similar issue on Ubuntu 16.04 with PySpark:

import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
    .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc2") \
    .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven") \
    .getOrCreate()
from mmlspark.lightgbm import LightGBMClassifier
LightGBMClassifier()

This gives me the following error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-5-f9c170eb7541> in <module>
----> 1 LightGBMClassifier()

~/anaconda3/envs/cl_susp_env/lib/python3.6/site-packages/pyspark/__init__.py in wrapper(self, *args, **kwargs)
    108             raise TypeError("Method %s forces keyword arguments." % func.__name__)
    109         self._input_kwargs = kwargs
--> 110         return func(self, **kwargs)
    111     return wrapper
    112 

/tmp/spark-3fd59146-3144-47ee-95cc-54bae1c707a4/userFiles-a0a10de8-4ea8-4972-81cf-32431d7f12e9/com.microsoft.ml.spark_mmlspark_2.11-1.0.0-rc2.jar/mmlspark/lightgbm/_LightGBMClassifier.py in __init__(self, baggingFraction, baggingFreq, baggingSeed, binSampleCount, boostFromAverage, boostingType, categoricalSlotIndexes, categoricalSlotNames, defaultListenPort, driverListenPort, earlyStoppingRound, featureFraction, featuresCol, featuresShapCol, improvementTolerance, initScoreCol, isProvideTrainingMetric, isUnbalance, labelCol, lambdaL1, lambdaL2, leafPredictionCol, learningRate, maxBin, maxBinByFeature, maxDeltaStep, maxDepth, metric, minDataInLeaf, minGainToSplit, minSumHessianInLeaf, modelString, negBaggingFraction, numBatches, numIterations, numLeaves, numTasks, objective, parallelism, posBaggingFraction, predictionCol, probabilityCol, rawPredictionCol, repartitionByGroupingColumn, slotNames, thresholds, timeout, topK, useBarrierExecutionMode, validationIndicatorCol, verbosity, weightCol)
     81     def __init__(self, baggingFraction=1.0, baggingFreq=0, baggingSeed=3, binSampleCount=200000, boostFromAverage=True, boostingType="gbdt", categoricalSlotIndexes=[], categoricalSlotNames=[], defaultListenPort=12400, driverListenPort=0, earlyStoppingRound=0, featureFraction=1.0, featuresCol="features", featuresShapCol="", improvementTolerance=0.0, initScoreCol=None, isProvideTrainingMetric=False, isUnbalance=False, labelCol="label", lambdaL1=0.0, lambdaL2=0.0, leafPredictionCol="", learningRate=0.1, maxBin=255, maxBinByFeature=[], maxDeltaStep=0.0, maxDepth=-1, metric="", minDataInLeaf=20, minGainToSplit=0.0, minSumHessianInLeaf=0.001, modelString="", negBaggingFraction=1.0, numBatches=0, numIterations=100, numLeaves=31, numTasks=0, objective="binary", parallelism="data_parallel", posBaggingFraction=1.0, predictionCol="prediction", probabilityCol="probability", rawPredictionCol="rawPrediction", repartitionByGroupingColumn=True, slotNames=[], thresholds=None, timeout=1200.0, topK=20, useBarrierExecutionMode=False, validationIndicatorCol=None, verbosity=1, weightCol=None):
     82         super(_LightGBMClassifier, self).__init__()
---> 83         self._java_obj = self._new_java_obj("com.microsoft.ml.spark.lightgbm.LightGBMClassifier")
     84         self.baggingFraction = Param(self, "baggingFraction", "baggingFraction: Bagging fraction (default: 1.0)")
     85         self._setDefault(baggingFraction=1.0)

~/anaconda3/envs/cl_susp_env/lib/python3.6/site-packages/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
     67             java_obj = getattr(java_obj, name)
     68         java_args = [_py2java(sc, arg) for arg in args]
---> 69         return java_obj(*java_args)
     70 
     71     @staticmethod

~/anaconda3/envs/cl_susp_env/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1567         answer = self._gateway_client.send_command(command)
   1568         return_value = get_return_value(
-> 1569             answer, self._gateway_client, None, self._fqn)
   1570 
   1571         for temp_arg in temp_args:

~/anaconda3/envs/cl_susp_env/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
    129     def deco(*a, **kw):
    130         try:
--> 131             return f(*a, **kw)
    132         except py4j.protocol.Py4JJavaError as e:
    133             converted = convert_exception(e.java_exception)

~/anaconda3/envs/cl_susp_env/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling None.com.microsoft.ml.spark.lightgbm.LightGBMClassifier.
: java.lang.NoClassDefFoundError: org/apache/spark/ml/util/MLWritable$class
        at com.microsoft.ml.spark.lightgbm.LightGBMClassifier.<init>(LightGBMClassifier.scala:27)
        at com.microsoft.ml.spark.lightgbm.LightGBMClassifier.<init>(LightGBMClassifier.scala:30)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.ml.util.MLWritable$class
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
        ... 13 more

Environment: Ubuntu 16.04, Python 3.6, PySpark 3.0.0
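
This looks like the same root cause: mmlspark_2.11 is a Scala 2.11 artifact, while PySpark 3.0.0 is built with Scala 2.12. A sketch of one way out, assuming downgrading is an option (pyspark==2.4.5 is an illustrative Scala 2.11 release, not a tested recommendation):

# First: pip install pyspark==2.4.5   (a Scala 2.11 build, matching mmlspark_2.11)
import pyspark

spark = (pyspark.sql.SparkSession.builder.appName("MyApp")
         .config("spark.jars.packages",
                 "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc2")
         .config("spark.jars.repositories",
                 "https://mmlspark.azureedge.net/maven")
         .getOrCreate())

from mmlspark.lightgbm import LightGBMClassifier

clf = LightGBMClassifier()  # constructs once the Scala versions agree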

khaarthikm commented 3 years ago

The issue is still open. Unfortunately, Spark 3.0 is not compatible with mmlspark.

VPisanoCM commented 2 years ago

I think this is still open. Databricks doesn't offer any Spark versions below 2.4.3.

madhuyadu commented 2 years ago

Is mmlspark not compatible with Spark 3.0?