Open anigmo97 opened 7 months ago
Step 1: Create a Spark session

```python
spark = SparkSession.builder \
    .config('spark.jars.packages', 'ml.combust.mleap:mleap-spark_2.12:0.19.0') \
    .getOrCreate()
```
Your example is using mleap 0.19.0. Does this go away if you use the latest version? Also note that v0.23.1 is tested against Spark 3.4. I'd suspect it still works with Spark 3.3, but untested/unsupported.
Hello @jsleight, you're right, I used the v0.19.0 jar by mistake.
I have tested using the correct jar:
```python
spark = SparkSession.builder \
    .config('spark.jars.packages', 'ml.combust.mleap:mleap-spark_2.12:0.23.1') \
    .getOrCreate()
```
And the results remain the same: the attributes are lost.
Looks like the op isn't serializing the impurities right now.
Looking at what `withImpurities` is doing, it seems that this is extra metadata that can aid in debugging but isn't critical to inference tasks. Excluding the impurities keeps the bundle sizes down.
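For context (not part of the original thread): in a regression tree, a node's impurity is typically the variance of the target values that reach that node. A minimal pure-Python sketch, independent of Spark:

```python
def node_impurity(targets):
    """Variance impurity for a regression-tree node:
    mean squared deviation of the targets reaching the node."""
    n = len(targets)
    mean = sum(targets) / n
    return sum((y - mean) ** 2 for y in targets) / n

# A pure node (all targets equal) has zero impurity;
# a mixed node has positive impurity.
print(node_impurity([3.0, 3.0, 3.0]))  # 0.0
print(node_impurity([1.0, 2.0, 3.0]))  # 2/3
```

Dropping this per-node statistic from the bundle loses nothing for prediction, which is why it can be excluded to shrink bundle sizes, but tools that introspect the tree no longer see it.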
Hello @jsleight
The impurities are important for explainability; for example, the SHAP library uses them to calculate SHAP values.
Yeah for sure. But I'd argue that serializing to mleap is for inference tasks. To do evaluation and introspection you could just use

```python
pipeline.save(path)
pipeline.load(path)
```

with Spark's built-in functions, then `serializeToBundle` when you're ready to productionize the model.
Hello @jsleight
I have no knowledge of Scala, but I think I understood how objects are serialized internally.

What do you think about the possibility of an additional parameter in `serializeToBundle` and `deserializeFromBundle` that lets us pass a Map with:

- Key: canonical name of the class you want to serialize in a special way
- Value: custom Ops to apply to that class

Then the `BundleRegistry` would check whether a class is in the new map and, if not, use the defaults.

With this, users could create their own ops and add or change attributes.
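The proposed lookup can be sketched in a few lines. This is a hypothetical illustration (the names `OpRegistry`, `op_for`, and the stand-in model class are invented for this sketch; mleap's actual registry is Scala):

```python
class OpRegistry:
    """Hypothetical sketch of the proposed override map: custom ops
    keyed by a class's canonical name, falling back to defaults."""

    def __init__(self, default_ops, custom_ops=None):
        self._default_ops = default_ops      # {canonical_name: op}
        self._custom_ops = custom_ops or {}  # user-supplied overrides

    def op_for(self, obj):
        name = type(obj).__module__ + "." + type(obj).__qualname__
        # Prefer the user's custom op; otherwise use the default.
        if name in self._custom_ops:
            return self._custom_ops[name]
        return self._default_ops[name]


class DecisionTreeModel:  # stand-in for the real Spark model class
    pass


model = DecisionTreeModel()
name = type(model).__module__ + "." + type(model).__qualname__

# Default op drops impurities; the custom op keeps them.
defaults = {name: lambda m: {"serialize_impurities": False}}
custom = {name: lambda m: {"serialize_impurities": True}}

registry = OpRegistry(defaults, custom)
op = registry.op_for(model)
print(op(model))  # {'serialize_impurities': True}
```

Classes without an entry in the custom map would keep their existing behavior, so the change would be backward compatible.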
Ah, in mleap you can do that exact process by altering the ops registry. We use it for xgboost in order to allow xgboost models to be serialized in different ways depending how you want to serve them. See this readme and associated xgboost-runtime code as an example.
Using this process, your approach would be to:
Issue Description
Pyspark DecisionTreeRegressionModel loses values in attributes after packaging and loading them.
Minimal Reproducible Example
mleap version: 0.23.1
pyspark version: 3.3.0
Python version: 3.10.6
If we take a look at the created model, we can see that the nodes have different attributes.
If I save and load the model, the results are: