Hi @lukedex,
This is a great question, but unfortunately we have no support for custom loss functions in EBMs today. It's something we're working towards on our backlog, but it may be quite some time before we can support it.
It is very reasonable that a mismatch between the target distribution and the loss function could cause performance issues, so I think your intuition is right. Regression EBMs directly optimize MSE right now, so it makes sense that they perform better in that category and worse in others. Happy to jump on a call and brainstorm if that would be helpful -- feel free to reach out to us at interpret@microsoft.com. In addition, we'll update this issue if we add support for this in the future.
-InterpretML Team
@interpret-ml
Hey, I did email two weeks ago but have had no response. Just a bit worried it's been pushed into the spam/junk folder.
Hi @lukedex,
Really appreciate that you reached out again on here -- managed to find your email, and just sent you a reply. Let us know if you didn't receive it!
Hi @interpret-ml, any update on custom error metrics?
Hi @nchesk -- We've done some work on custom losses, but it's a big change and it'll be a while (months at least) before it's ready.
-InterpretML team
Hi, any news on this? I would need quantile regression and I am thinking about implementing it myself. Do you have a branch I could fork and work on?
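(For reference, by quantile regression I mean fitting with the standard pinball loss, which penalizes over- and under-prediction asymmetrically; this is the textbook definition, not anything specific to interpret:)

$$
L_\tau(y, \hat{y}) =
\begin{cases}
\tau \,(y - \hat{y}), & y \ge \hat{y},\\
(1 - \tau)\,(\hat{y} - y), & y < \hat{y},
\end{cases}
\qquad \tau \in (0, 1).
$$

Minimizing the expected pinball loss yields the $\tau$-th conditional quantile of the target, which is exactly the asymmetric behavior I'm after.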
Hi @lmssdd - We've done some work on it, but it isn't ready yet. You might want to look at my response to another loss function question regarding implementation https://github.com/interpretml/interpret/issues/380#issuecomment-1311403623
We work from the develop branch.
Hi @lukedex, @nchesk, and @lmssdd -- I'm happy to report that we finally support alternative objectives in v0.4.0, which has recently been published on PyPI. "poisson_deviance" is one of the supported objectives. More details are available in our documentation: https://interpret.ml/docs/ebm.html#explainableboostingregressor
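For example, a minimal usage sketch (the synthetic dataset here is purely illustrative):

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingRegressor

# Synthetic count data whose rate depends on the features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = rng.poisson(lam=np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1] + 1.0))

# Fit an EBM regressor with the Poisson deviance objective
ebm = ExplainableBoostingRegressor(objective="poisson_deviance")
ebm.fit(X, y)
preds = ebm.predict(X)
```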
Great! Thanks a lot!
Hello, thanks a lot for the hard work and the fantastic tool! I used it successfully to benchmark explainable versus non-explainable models in doi.org/10.1007/978-3-031-44064-9_26
Are you still working on allowing users to define custom loss functions? In some fields, asymmetric loss functions prove to be very effective when there is a significant imbalance between different types of prediction errors. Thanks in advance.
Thanks @JDE65, we really liked your paper and I've added it to the readme.
We're at the point where it's possible to specify custom objectives, but this currently requires a small modification to the C++. I've added an example objective to simplify this process.
To specify that the example objective should be used, you can invoke it this way from python:
ebm = ExplainableBoostingRegressor(objective="example")
The default "example" objective is currently RMSE. To change the objective you need to modify the CalcMetric, CalcGradient, and CalcGradientHessian functions, and then recompile using either "build.sh" or "build.bat".
Those functions are located here: https://github.com/interpretml/interpret/blob/4ad09213836a2f21cfc95567e46c4a37b74b7057/shared/libebm/compute/objectives/ExampleRegressionObjective.hpp#L81-L103
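Roughly speaking, here is what those three functions would compute for Poisson deviance with the usual log link (this is the standard derivation, not code from the interpret sources): with raw score $f$ and mean prediction $\mu = e^{f}$, the per-sample negative log-likelihood is, up to a term constant in $f$,

$$
\ell(y, f) = e^{f} - y\,f,
\qquad
\frac{\partial \ell}{\partial f} = e^{f} - y = \mu - y,
\qquad
\frac{\partial^{2} \ell}{\partial f^{2}} = e^{f} = \mu .
$$

So a Poisson version of CalcGradient would presumably return $\mu - y$, CalcGradientHessian the pair $(\mu - y,\ \mu)$, and CalcMetric the mean deviance.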
If you implement a nice objective that would be useful to the wider community, please consider contributing it back to InterpretML. I've also included instructions on how to create a new objective with its own tag instead of re-using the "example" tag.
Hello, is it possible to use custom error metrics/loss functions (such as mean Poisson deviance from the sklearn package) for our model builds?
My target has a Poisson distribution, which I believe is causing some adverse effects when comparing the predictions to those from a GBM on dual lift charts. The EBM performs better on RMSE and Gini but fails quite significantly on mean Poisson deviance.
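For concreteness, the comparison I'm running is along these lines (the arrays below are placeholders standing in for the two models' predictions):

```python
import numpy as np
from sklearn.metrics import mean_poisson_deviance

# Placeholder hold-out targets and predictions from the two fitted models;
# predictions must be strictly positive for Poisson deviance.
y_test    = np.array([0.0, 2.0, 1.0, 4.0, 3.0])
ebm_preds = np.array([0.5, 1.8, 1.2, 3.5, 2.9])
gbm_preds = np.array([0.4, 2.1, 0.9, 4.2, 3.1])

print("EBM mean Poisson deviance:", mean_poisson_deviance(y_test, ebm_preds))
print("GBM mean Poisson deviance:", mean_poisson_deviance(y_test, gbm_preds))
```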
Any advice is greatly appreciated.
Thank you Luke