thinline72 / nsl-kdd

PySpark solution to the NSL-KDD dataset: https://www.unb.ca/cic/datasets/nsl.html
Apache License 2.0

Error with label_mapping_model #4

Closed: rahul2307 closed this issue 4 years ago

rahul2307 commented 6 years ago

I am using Python 3.6. While running your project I get an error in the label_mapping_model step: a Py4JJavaError. Below is the error report for your reference:

Py4JJavaError Traceback (most recent call last)

<ipython-input-...> in <module>()
      4 
      5 # Fitting preparation pipeline
----> 6 labels_mapping_model = labels_mapping_pipeline.fit(train_df)
      7 
      8 # Transforming labels column and adding id column

~\Anaconda3\lib\site-packages\pyspark\ml\base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

~\Anaconda3\lib\site-packages\pyspark\ml\pipeline.py in _fit(self, dataset)
    107                     dataset = stage.transform(dataset)
    108                 else:  # must be an Estimator
--> 109                     model = stage.fit(dataset)
    110                     transformers.append(model)
    111                     if i < indexOfLastEstimator:

~\Anaconda3\lib\site-packages\pyspark\ml\base.py in fit(self, dataset, params)
    130                 return self.copy(params)._fit(dataset)
    131             else:
--> 132                 return self._fit(dataset)
    133         else:
    134             raise ValueError("Params must be either a param map or a list/tuple of param maps, "

~\Anaconda3\lib\site-packages\pyspark\ml\wrapper.py in _fit(self, dataset)
    286 
    287     def _fit(self, dataset):
--> 288         java_model = self._fit_java(dataset)
    289         model = self._create_model(java_model)
    290         return self._copyValues(model)

~\Anaconda3\lib\site-packages\pyspark\ml\wrapper.py in _fit_java(self, dataset)
    283         """
    284         self._transfer_params_to_java()
--> 285         return self._java_obj.fit(dataset._jdf)
    286 
    287     def _fit(self, dataset):

~\Anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161 
   1162         for temp_arg in temp_args:

~\Anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~\Anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    318                 raise Py4JJavaError(
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:
    322                 raise Py4JError(

Py4JJavaError: An error occurred while calling o758.fit.
: java.lang.IllegalArgumentException
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.<init>(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:46)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:449)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:432)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:103)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:103)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:432)
    at org.apache.xbean.asm5.ClassReader.a(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.b(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.xbean.asm5.ClassReader.accept(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:262)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:261)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:261)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:159)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2292)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2066)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:939)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:938)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
    at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
    at org.apache.spark.rdd.RDD$$anonfun$countByValue$1.apply(RDD.scala:1208)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.countByValue(RDD.scala:1207)
    at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:140)
    at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:109)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.base/java.lang.Thread.run(Unknown Source)
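
A note for readers hitting this with a local install: the java.base/jdk.internal.reflect frames in the trace indicate the driver JVM is Java 9 or newer, and an IllegalArgumentException from org.apache.xbean.asm5.ClassReader inside Spark's ClosureCleaner is the usual symptom of running PySpark 2.x on a post-Java-8 JVM. A minimal sketch for checking both versions from the same notebook (standard PySpark/py4j calls; _jvm is an internal handle and not part of this repo's code):

    import pyspark
    from pyspark.sql import SparkSession

    # Start (or reuse) a local Spark session just to inspect the runtime
    spark = SparkSession.builder.master("local[*]").getOrCreate()

    # PySpark version installed in the Python environment
    print("PySpark:", pyspark.__version__)

    # Version of the JVM that Spark actually launched; PySpark 2.x only
    # supports Java 8, so a value such as "9", "10.0.2" or "11" here would
    # be consistent with the ClosureCleaner IllegalArgumentException above.
    print("Java:", spark.sparkContext._jvm.System.getProperty("java.version"))

If the printed Java version is newer than 8, pointing JAVA_HOME at a Java 8 JDK (or upgrading PySpark) is the usual remedy.
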
Smikha commented 6 years ago

Hey, I'm getting the same error. Did you find a workaround for this?

abirnedia commented 6 years ago

Hey, I'm getting the same error too. Did you find a workaround for this, or can you help us please?

thinline72 commented 4 years ago

Sorry for answering 2 years later 😂

Finally got some time to provide an easier way to run the project in a Docker container via the make nsl-kdd-pyspark command, with the latest PySpark installed. I've run the notebook end-to-end and haven't faced any issues, so I'd advise just using the Docker container, along the lines sketched below.
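
For reference, a rough usage sketch (assuming Docker and GNU make are installed; the repository URL is inferred from the project name above and only the make nsl-kdd-pyspark target is taken from this thread):

    git clone https://github.com/thinline72/nsl-kdd.git
    cd nsl-kdd
    make nsl-kdd-pyspark

Running inside the container pins a compatible Java/PySpark pair, which sidesteps the local JVM mismatch discussed earlier in this issue.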