Closed: AkshayTilekar closed this issue 6 years ago
> In Python Tree:

The print-out of the Python tree does not use the correct numeric type for representing numeric values: it shows all values as `double`, but they should be either `float` or `int` (and never `double`).
> In above case 526225.988222 getting changed in to 526226 ...

Feature "AA.SUB" is an integer value, so `526226` is a correct representation.
> ... and 5192.53104377 into 5192.5312

Feature "BB.SUB" is a 32-bit floating-point value, so `5192.5312` is a correct representation.
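The rounding described above can be reproduced with NumPy (an illustrative sketch; the values are the ones quoted in this issue):

```python
import numpy as np

# The double-precision values printed by the Python tree.
aa_sub = 526225.988222
bb_sub = 5192.53104377

# Rounding them to single precision yields exactly the values
# that appear in the PMML file.
print(float(np.float32(aa_sub)))  # 526226.0 -> printed as 526226
print(float(np.float32(bb_sub)))  # 5192.53125 -> shortest round-trip decimal is 5192.5312
```

In other words, the PMML file is not corrupting the thresholds; it is printing the nearest single-precision values.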
> I have analyzed the source code of jpmml-converter and found that the way converting Float values to Double is wrong (in ValueUtil.java).
JPMML-Converter does not perform arbitrary data type conversions. If the underlying ML framework uses 32-bit math, then all values are represented in PMML using `Float#toString()`.
> Can you please analyze and see if this is the issue.
This is not an issue. The JPMML software stack is behaving correctly: the generated PMML document is both syntactically and semantically correct, and when evaluated using the JPMML-Evaluator library (I don't know about other PMML engines) it gives exactly the same predictions as Scikit-Learn.
@vruusmann you say "JPMML-Converter does not perform arbitrary data type conversions. If the underlying ML framework uses 32-bit math, then all values are represented in PMML using Float#toString()." but jpmml-sklearn casts tree thresholds to single-precision float here.
> but jpmml-sklearn casts tree thresholds to single-precision float here.
@jnothman But isn't it the case that Scikit-Learn stores threshold values as `float64`, but evaluates them as `float32`? The `X` argument of the `predict(X)` and `predict_proba(X)` methods is explicitly cast into a `float32` array type, so it would only make sense to compare it against `float32` thresholds as well.
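A small NumPy sketch (with illustrative values taken from this issue, not from scikit-learn itself) of why the comparison precision matters: a `float32` sample can land on the opposite side of a split depending on whether the threshold is kept as `float64` or rounded to `float32`:

```python
import numpy as np

threshold64 = 5192.53104377            # threshold in double precision
threshold32 = np.float32(threshold64)  # threshold as written to PMML (rounds up to 5192.53125)

# A float32 sample that lands exactly on the rounded threshold.
x = np.float32(5192.53125)

print(bool(x <= threshold64))  # False: 5192.53125 > 5192.53104377
print(bool(x <= threshold32))  # True: equal at float32 precision
```

So for a `float32` input, only the `float32` threshold reproduces the evaluation-time behaviour.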
The PMML markup holds evaluation-time value representations, so `float32` (as implemented by `Float#toString()`) is correct. (I've got integration tests: with `Float#toString()` everything passes cleanly, but with `Double#toString()` there are around 1 to 3% bad predictions.)
As pointed out in https://github.com/scikit-learn/scikit-learn/issues/9636#issuecomment-325267724, the probable cause for bad predictions in the current case is "pretty formatting of integer-valued floats as integers". It must be the case that 526225.988222 is a special float value, which is declared to be integer-valued by `com.google.common.math.DoubleMath#isMathematicalInteger(double)`, but actually isn't so. The problem is that this method gives correct results for Java's `double` values, but not for `(double)float` values (as returned by `Float#doubleValue()`).
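The effect can be mimicked in Python, with `float.is_integer()` standing in for `DoubleMath#isMathematicalInteger(double)` in this sketch:

```python
import numpy as np

t64 = 526225.988222           # the original double-precision value
t32 = float(np.float32(t64))  # the same value after a float -> double round trip

print(t64.is_integer())  # False: the double is not integer-valued
print(t32.is_integer())  # True: the float32 rounds to exactly 526226.0, so the
                         # widened value looks like a mathematical integer
```

The integer check is sound for genuine `double` values but misleading for widened `float` values, which is exactly the failure mode described above.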
Reopening: the pretty formatting code is a bad/premature optimization, and needs to go.
I had forgotten that scikit-learn trees cast data to float32; sorry for the confusion. But we keep thresholds as doubles when predicting (and when fitting, as far as I know), so the comparison should be at the highest precision, shouldn't it?
I should have someone investigate whether we can support keeping data in 64-bit. But that's not the issue here, which is about comparison to a threshold. I think scikit-learn is always comparing 32-bit data to a 64-bit threshold, so jpmml should probably not downcast.
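The behaviour described here can be checked directly on a fitted tree: the learned thresholds are stored in double precision, while `predict(X)` works on a `float32` copy of the input (a minimal sketch; the toy data is made up):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: one feature, two classes.
X = np.array([[5192.0], [5193.0]])
y = np.array([0, 1])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Thresholds are kept in double precision...
print(clf.tree_.threshold.dtype)  # float64

# ...but predict() casts X to float32 first, so each split effectively
# compares 32-bit data against a 64-bit threshold.
print(clf.predict(X))  # [0 1]
```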
I'm not certain that the pretty printing is to blame here, though I agree that it is liable to produce errors.
@AkshayTilekar, @sudarshan1413 - Can you check out the `HEAD` version of the JPMML-SkLearn library, build the executable uber-JAR file, and use it to convert your previously problematic Scikit-Learn pipelines?
Does the commit https://github.com/jpmml/jpmml-sklearn/commit/8c1b208749ad9db4a00b91b71c6e69c957dffd92 resolve your problem with mismatching predictions (between Scikit-Learn and (J)PMML) or not?
Hi @vruusmann, thanks a lot for that. It is working fine for us, and we were able to match results perfectly with the JAR compiled from your HEAD version.
We are facing a similar problem with r2pmml as well, where we are converting an XGBoost model to PMML and then trying to match its results with the R predictor. Here too we are seeing a precision issue similar to the Python one. Maybe a similar issue is happening with r2pmml (jpmml-r) as well. Any thoughts?
Thanks @sudarshan1413 for your feedback! I will be releasing an updated `sklearn2pmml` package later today, so you should be able to export corrected models straight from your Python scripts soon.
As for the XGBoost problems, it must be something other than the data type/precision of split thresholds. XGBoost does all its computations using the `float` data type exclusively. The most common cause for R/XGBoost vs. (J)PMML mismatches is that the XGBoost feature map has been generated incorrectly. For example, see https://github.com/jpmml/r2pmml/issues/26
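For context, an XGBoost feature map is a plain text file with one line per feature: index, name, and type ("q" for quantitative, "i" for indicator, "int" for integer). A sketch using the feature names from this issue (the data itself is hypothetical):

```python
# Sketch of an XGBoost feature map ("fmap") file.
# Each tab-separated line is: <feature index> <feature name> <feature type>
fmap = "0\tAA.SUB\tint\n1\tBB.SUB\tq\n"

# The indices must match the column order of the training matrix;
# a reordered or shifted feature map yields a structurally valid but
# wrongly-labelled PMML, and hence mismatching predictions.
for line in fmap.splitlines():
    index, name, ftype = line.split("\t")
    print(index, name, ftype)
```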
Hi, I am trying to generate a PMML for Isolation Forest using sklearn2pmml. While generating the PMML file, variable thresholds are getting changed in the PMML file. It shows the correct result when we print the tree using the Python pickle file, but in the actual PMML the variable values are changed.
In Python Tree:
In PMML Tree:
In the above case, 526225.988222 is getting changed into 526226, and 5192.53104377 into 5192.5312.
I have analyzed the source code of jpmml-converter and found that the way Float values are converted to Double is wrong (in ValueUtil.java).
Can you please analyze and see if this is the issue? If yes, can you please suggest a workaround/solution for it.