interpretml / interpret

Fit interpretable models. Explain blackbox machine learning.
https://interpret.ml/docs
MIT License

Support for more parameters in the Differentially Private models #524

Open sadsquirrel369 opened 1 month ago

sadsquirrel369 commented 1 month ago

Do the Differentially Private models support monotone constraints and other objective functions, given that they inherit from the same base EBM model?

paulbkoch commented 1 month ago

Hi @sadsquirrel369 --

Regarding alternative objectives: I'll let @Harsha-Nori (the primary author of the DP-EBM paper) comment on that in more depth, but my limited understanding is that the DP proof would need to be updated for each alternative objective.

For monotonicity, we currently support two types of monotone constraints. Please note that we recommend post-processed monotonicity when monotonicity is being used in the context of responsible AI. If monotone constraints are applied during fitting, the model will often be able to shift any monotone violations to other correlated features (see https://github.com/interpretml/interpret/issues/184 for more details). We only recommend monotone constraints during fitting when you are 100% sure the underlying generating function is fundamentally monotone, for example when a feature comes from a physical system, or for investigative purposes where you're curious to see where the model shifts effect when monotone constraints are applied (you can do a model diff in this case).
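For reference, here's a minimal sketch of the fit-time route on a regular (non-DP) EBM, subject to the caveat above, and assuming the `monotone_constraints` parameter (one entry per feature: +1 increasing, -1 decreasing, 0 unconstrained):

```python
# Minimal sketch of fit-time monotone constraints on a regular EBM.
# Assumes the monotone_constraints parameter: +1 increasing, -1 decreasing, 0 none.
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(1000, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=1000)

ebm = ExplainableBoostingRegressor(monotone_constraints=[1, -1])
ebm.fit(X, y)
```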

The post-processed monotonization via model editing should be fine to use with DP models since it operates on the public model after fitting (https://interpret.ml/docs/python/api/ExplainableBoostingRegressor.html#interpret.glassbox.ExplainableBoostingRegressor.monotonize); a short example is sketched at the end of this comment. Applying monotone constraints during fitting is currently not supported for DP models, but I think it would be possible to add. @Harsha-Nori can correct me if I'm wrong, but I believe the noisy update is fully public information on this line:

https://github.com/interpretml/interpret/blob/fe10ac33392e342ab78eb28d1a1061d866b7a4e3/python/interpret-core/interpret/glassbox/_ebm/_boost.py#L166

Monotone constraints could then be honored by disallowing or adjusting the update whenever it would otherwise introduce a monotone violation.
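To illustrate the idea (a hypothetical sketch, not code from the library, and the helper name is made up): since the noise has already been added, adjusting the update is post-processing on public information, so it shouldn't affect the DP guarantee. One possible adjustment is a running-maximum clamp on the proposed per-bin scores:

```python
# Hypothetical sketch (not interpret's implementation): adjust a noisy per-bin
# update so the cumulative shape function stays non-decreasing.
import numpy as np

def adjust_update_monotone_increasing(current_scores, noisy_update):
    """Return an adjusted update so that current_scores + update is non-decreasing."""
    proposed = current_scores + noisy_update
    # A running maximum forces each bin's score to be >= the bin to its left.
    adjusted = np.maximum.accumulate(proposed)
    return adjusted - current_scores

current = np.array([0.0, 0.5, 0.4, 1.0])    # existing per-bin scores for one feature
update = np.array([0.1, -0.3, 0.2, -0.1])   # noisy boosting update (already public)
print(adjust_update_monotone_increasing(current, update))
```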
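And here's the post-processing route mentioned above, sketched against the documented monotonize() method (the DP hyperparameters shown are placeholder assumptions; in real use you would also choose them for your privacy budget and supply privacy bounds for the features):

```python
# Minimal sketch: fit a DP-EBM, then enforce monotonicity after the fact with
# monotonize(), which edits the already-released (public) model.
import numpy as np
from interpret.privacy import DPExplainableBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(1000, 2))
y = 3.0 * X[:, 0] + rng.normal(size=1000)

dp_ebm = DPExplainableBoostingRegressor(epsilon=1.0, delta=1e-6)
dp_ebm.fit(X, y)

# Post-process: force the shape function of feature 0 to be non-decreasing.
# This happens after fitting, so it does not touch the privacy accounting.
dp_ebm.monotonize(0, increasing=True)
```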