titicaca / spark-iforest

Isolation Forest on Spark
Apache License 2.0
227 stars 89 forks

"Task not serializable" when loading trained model #37

Closed koresss closed 2 years ago

koresss commented 2 years ago

Hello, I have saved a trained model, and when I try to load it I get the following error: https://pastebin.com/raw/jG2BRSwV

```
Traceback (most recent call last):
  File "/home/orestisk/Downloads/Netflow pipeline/distributed_netflow_inference.py", line 308, in <module>
    runPipeline(df)
  File "/home/orestisk/Downloads/Netflow pipeline/distributed_netflow_inference.py", line 168, in runPipeline
    model=IForestModel.load('spark_netflow_pickled_files/iforest')
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/ml/util.py", line 332, in load
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark_iforest/ml/iforest.py", line 95, in load
    java_obj = self._jread.load(path)
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/py4j-0.10.9.2-src.zip/py4j/java_gateway.py", line 1309, in __call__
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
  File "/home/orestisk/Downloads/venv/lib/python3.8/site-packages/pyspark/python/lib/py4j-0.10.9.2-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o629.load.
: org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:416)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:406)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2477)
    at org.apache.spark.rdd.RDD.$anonfun$map$1(RDD.scala:422)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
    at org.apache.spark.rdd.RDD.map(RDD.scala:421)
    at org.apache.spark.ml.iforest.IForestModel$.org$apache$spark$ml$iforest$IForestModel$$loadTreeNodes(IForest.scala:245)
    at org.apache.spark.ml.iforest.IForestModel$IForestModelReader.load(IForest.scala:305)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.NotSerializableException: org.apache.log4j.Logger
Serialization stack:
```

Is there some workaround for this? The Python API example from the README also fails on IForestModel.load. I am using Spark 3.2; here is my pom.xml as well: https://pastebin.com/raw/excYS1v6

Edit: I tried Spark 3.0 and I'm getting the same error.
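The `Caused by: java.io.NotSerializableException: org.apache.log4j.Logger` line is the actual root cause: during `load`, Spark's ClosureCleaner tries to Java-serialize a closure that (directly or via its enclosing object) captures a log4j `Logger`, and `Logger` does not implement `Serializable`. The failure mode can be reproduced without Spark at all. A minimal sketch, using a hypothetical `Logger` stand-in class and a hypothetical `TreeLoader` holder (neither is the real log4j or spark-iforest code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for org.apache.log4j.Logger: a plain class that does NOT implement Serializable.
class Logger {
    void info(String msg) { System.out.println(msg); }
}

// Mirrors an object shipped to executors that holds a logger as an instance field.
class TreeLoader implements Serializable {
    Logger log = new Logger();  // non-serializable field: serializing TreeLoader now fails
}

public class SerializationDemo {
    static String trySerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return "serialized OK";
        } catch (NotSerializableException e) {
            // Same exception class as the "Caused by" line in the stack trace above;
            // the message is the name of the offending class.
            return "NotSerializableException: " + e.getMessage();
        } catch (IOException e) {
            return "IOException: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(trySerialize(new TreeLoader()));
    }
}
```

Anything referenced inside a closure that Spark sends to executors must be serializable, which is why the error surfaces on `load` even though the logger is never used on the executors.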

koresss commented 2 years ago

I commented out the log4j usage in the Scala code entirely, and now the model loads properly.
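Removing the logger works, but the usual lighter-weight fix for this class of error is to mark the logger field `transient` (in Scala, `@transient`, typically as `@transient lazy val`), so serialization skips it instead of failing. A sketch of the same idea in plain Java, again with a hypothetical `Logger` stand-in and `TreeLoader` holder:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Logger { }  // stand-in for a non-serializable logger class

class TreeLoader implements Serializable {
    // 'transient' tells JVM serialization to skip this field entirely;
    // Scala's @transient annotation compiles down to the same field flag.
    transient Logger log = new Logger();
}

public class TransientDemo {
    static String trySerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return "serialized OK";  // the transient field no longer breaks serialization
        } catch (IOException e) {
            return "failed: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(trySerialize(new TreeLoader()));
    }
}
```

One caveat: a transient field is null after deserialization, which is why the Scala `@transient lazy val` pattern is common — the logger is re-created on first use on the executor rather than shipped over the wire.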