tensorflow / decision-forests

A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
Apache License 2.0

variable importance option #52

Closed. Howard-ll closed this issue 3 years ago.

Howard-ll commented 3 years ago

Hello!

First of all, I highly appreciate your efforts on TF-DF. I found that there are multiple options for variable importance, such as NUM_AS_ROOT:

variable_importance = model.make_inspector().variable_importances()['NUM_AS_ROOT']
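
For reference, this is how I am listing the available options on my side (a minimal sketch; `model` is my already-trained TF-DF model):

```python
inspector = model.make_inspector()

# The keys of the dictionary returned by variable_importances() are the
# variable importance names available for this model (e.g. NUM_AS_ROOT, among others).
print(list(inspector.variable_importances().keys()))
```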

1) Could you let me know which option I should use to get an importance list similar to sklearn's?
2) Where can I get detailed descriptions of those options? (How to use them, what they mean.)

Thank you!

tsachiblauamat commented 3 years ago

I ran the feature importance computation and compared the results to the sklearn output. Not only are the results different, but the results I'm getting from this implementation don't make any sense (given the info I have about my data). For example, a feature that is constant is one of the most significant features (it got the highest value).

Maybe I don't know how to read the output properly? For ("data:0.33" (1; #27), 235), does this mean that feature number 27 got a score of 235?

Tsachi

janpfeifer commented 3 years ago

Thanks @Howard-ll, we are happy to hear the tools are useful!

There are various definitions of "feature importance" -- they are all metrics about the model/dataset, but there is no absolute "truth" or single best one.

That said, we should have a clear documentation page with the list of all the feature importances we support, with pointers to the papers that define some of them. Marking this as an "enhancement" for us to work on.

achoum commented 3 years ago

The list of feature importances and their definitions is given here in the Yggdrasil user manual.

model.make_inspector().variable_importances() returns a list of ([py_tree.dataspec.SimpleColumnSpec](https://www.tensorflow.org/decision_forests/api_docs/python/tfdf/inspector/SimpleColumnSpec), float) tuples (see the doc).

"data:0.33" (1; #27) is the representation of a SimpleColumnSpec object with the format `"{feature name}" (type idx, #column_idx)" (see here). The last displayed value is the variable importance value (235 in this case).

Note that different variable importances have different semantics. Unless specified otherwise, the greater the value, the more important the feature. NUM_AS_ROOT is an example of an exception (see its definition).
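
For illustration, a minimal sketch of iterating over these tuples (assuming a trained TF-DF model named `model`; which importance keys are present depends on the model type and training configuration):

```python
inspector = model.make_inspector()

# Each entry is a (SimpleColumnSpec, float) tuple; SimpleColumnSpec carries
# the feature name and column index.
for column_spec, value in inspector.variable_importances()["NUM_AS_ROOT"]:
    print(f"{column_spec.name} (column #{column_spec.col_idx}): {value}")
```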

> Could you let me know which option I should use to get an importance list similar to sklearn's?

Sklean's "mean decrease in impurity" is likely close to the SUM_SCORE variable importance in. Similarly, Sklean's "based on feature permutation" is likely close to the MEAN_DECREASE_IN_ACCURACY.

> Where can I get detailed descriptions of those options? (How to use them, what they mean.)

The Variable importance section of the user documentation and the model specific documentation (for example, Random Forest).

The variable_importances() method is used in both the beginner and advanced colabs. However, we have not yet published an example of using feature importances (e.g., for feature selection).
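
A rough sketch of importance-based feature selection could look like the following (hypothetical; it assumes the importance list is sorted by decreasing value and that `train_ds` is the training dataset):

```python
import tensorflow_decision_forests as tfdf

# Keep the 10 features with the largest NUM_NODES importance.
ranked = model.make_inspector().variable_importances()["NUM_NODES"]
top_features = [spec.name for spec, _ in ranked[:10]]

# Retrain a model restricted to the selected features.
selected_model = tfdf.keras.RandomForestModel(
    features=[tfdf.keras.FeatureUsage(name) for name in top_features],
    exclude_non_specified_features=True,
)
selected_model.fit(train_ds)
```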