-
Might pdp be extended to work directly with output from Keras3? Also, variable importance plots would be a nice addition. There are a number of related ideas in Molnar’s nice survey book “Interpretable …
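Since partial dependence is model-agnostic, it only needs a prediction function, not any particular model class. A minimal sketch of the idea, assuming a generic `predict` callable (which could be a Keras model's `predict` method) and a single numeric feature indexed by column:

```python
import numpy as np

def partial_dependence(predict, X, feature, grid=None):
    """Model-agnostic partial dependence for one feature.

    `predict` is any function mapping an (n, p) array to predictions
    (e.g. a Keras model's `predict`); `feature` is a column index.
    This is an illustrative sketch, not pdp's actual implementation.
    """
    if grid is None:
        grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                      # force the feature to the grid value
        pd_values.append(np.mean(predict(Xv)))  # average prediction over the data
    return grid, np.asarray(pd_values)
```

For an additive model, the resulting curve recovers the feature's marginal effect up to a constant, which is the usual sanity check for a PDP implementation.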
-
## Overview
The purpose of this issue/document is two-fold:
- to summarize the regression analyses of the project in a **textual form**
- to summarize the results of these regression analyses in …
-
Hi @mattn, I wanted to use XGBoost for quantile regression but found that the pseudo-Huber loss does no better than a null model. Currently, `objective = 'reg:pseudohubererror'` pred…
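One possible reason: the pseudo-Huber loss is symmetric, so it pulls predictions toward the center of the conditional distribution regardless of which quantile you want. A common workaround (an assumption here, not an official XGBoost recipe) is a custom objective based on a smoothed pinball (quantile) loss; a sketch of the gradient/Hessian, with the real XGBoost signature `(preds, dtrain)` simplified to plain arrays:

```python
import numpy as np

def pinball_grad_hess(y_pred, y_true, alpha=0.9, delta=1.0):
    """Gradient/Hessian sketch for a smoothed pinball (quantile) loss.

    `alpha` is the target quantile; `delta` smooths the kink at zero so
    the Hessian is strictly positive. Illustrative only; XGBoost's real
    custom-objective signature is (preds, dtrain).
    """
    err = y_true - y_pred
    # smoothed indicator of err < 0, via a logistic around the kink
    s = 1.0 / (1.0 + np.exp(err / delta))
    grad = -(alpha - s)             # d(loss)/d(y_pred)
    hess = s * (1.0 - s) / delta    # positive curvature everywhere
    return grad, hess
```

Away from the kink this reproduces the asymmetric pinball gradients (`1 - alpha` when over-predicting, `-alpha` when under-predicting), which is what actually drives the fit toward the requested quantile.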
-
Homework 5 is complete. I think I discovered, most of the way through this assignment, the intended method for approaching these assignments. Thanks for being patient.
-
I am looking to extract the data underlying the feature importance plots and the stepped line plots for individual features, in order to make custom plots.
Is there a way to determine at what variabl…
-
A user has suggested that they would like feature/variable importance matrices/plots to be available for the stacked ensemble models in the AutoML leaderboard.
-
The ranger random forest model does provide an OOB estimate of error (the kind you would get by testing your model on a 'test' data subset), but I've been struggling to put to rest the question of mod…
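The reason the OOB estimate behaves like a test-set estimate is that each row is scored only by the trees whose bootstrap samples excluded it. A toy sketch of that mechanism, using bootstrap-mean "trees" as stand-ins for ranger's real trees (all names here are hypothetical):

```python
import numpy as np

def bagged_oob_mse(y, n_trees=200, seed=0):
    """Illustrative out-of-bag MSE for a bagged ensemble.

    Each 'tree' here just predicts the mean of its bootstrap sample
    (real trees would split on features); rows left out of a sample
    vote only through trees that never saw them, so the OOB error
    behaves like a held-out test error.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    oob_sum = np.zeros(n)
    oob_cnt = np.zeros(n)
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)        # bootstrap sample
        pred = y[idx].mean()                    # the 'tree'
        oob = np.setdiff1d(np.arange(n), idx)   # rows not sampled this round
        oob_sum[oob] += pred
        oob_cnt[oob] += 1
    mask = oob_cnt > 0                          # rows with at least one OOB vote
    oob_pred = oob_sum[mask] / oob_cnt[mask]
    return np.mean((y[mask] - oob_pred) ** 2)
```

With enough trees, roughly 37% of rows are out-of-bag for any single tree, so every row accumulates OOB votes and no separate test split is needed.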
-
Great download function. The plots are not visible in the .md file. A description of the variables (e.g. pay_1, pay_2) would help in understanding the EDA. How does `SelectKBest` work? How will t…
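On the `SelectKBest` question: it scores each feature with a univariate statistic against the target (scikit-learn's `f_classif` ANOVA F-test by default) and keeps the `k` highest-scoring columns. A self-contained sketch of that idea, using squared Pearson correlation as the score for simplicity rather than sklearn's actual statistic:

```python
import numpy as np

def select_k_best(X, y, k):
    """Sketch of SelectKBest's idea: score every feature with a
    univariate statistic, keep the k highest-scoring columns.

    sklearn's default score (f_classif) is an ANOVA F-test; squared
    correlation with y is used here only to keep the sketch minimal.
    """
    scores = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2
                       for j in range(X.shape[1])])
    keep = np.sort(np.argsort(scores)[::-1][:k])  # top-k feature indices
    return keep, X[:, keep]
```

The key caveat is that each feature is scored independently, so interactions between weak features are invisible to this kind of filter.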
-
An option, either in gbm.auto or as a separate function, to take line plots (feasibly as linesfiles CSVs) and overlay multiple lines on the same plot, to save space and allow comparisons.
Bin & gaus obviously, …
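A sketch of the overlay idea, assuming each linesfile CSV has simple `x,y` columns (the real linesfiles column names may differ; this is an illustration, not gbm.auto's API):

```python
import csv
import io
import matplotlib
matplotlib.use("Agg")            # headless backend for scripted use
import matplotlib.pyplot as plt

def overlay_linesfiles(named_csvs):
    """Overlay several line plots on one Axes.

    `named_csvs` maps a label (e.g. 'Bin', 'Gaus') to CSV text with
    'x,y' columns, a stand-in for reading the per-model linesfiles.
    """
    fig, ax = plt.subplots()
    for label, text in named_csvs.items():
        rows = list(csv.DictReader(io.StringIO(text)))
        xs = [float(r["x"]) for r in rows]
        ys = [float(r["y"]) for r in rows]
        ax.plot(xs, ys, label=label)   # one overlaid line per input file
    ax.legend()
    return fig, ax
```

In practice the same loop would read real files with `csv.DictReader(open(path))` and the figure would be saved once, giving a single comparison plot instead of one file per model.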
-
Currently interpret_model() returns plots. It would also be useful to return a data structure (e.g. a dataframe) containing the important variables and their relative importance, for downstream t…
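The requested return value could look something like the sketch below: a tidy dataframe of features and importances, sorted for downstream use. The `importances` vector is assumed to come from whatever the plot already visualizes (e.g. SHAP or tree-based importances); `importance_table` is a hypothetical helper, not part of the library:

```python
import pandas as pd

def importance_table(feature_names, importances):
    """Sketch of a machine-readable companion to the importance plot:
    one row per feature, absolute and relative importance, sorted
    descending so the top rows are the most important variables.
    """
    df = pd.DataFrame({"feature": feature_names,
                       "importance": importances})
    df["relative"] = df["importance"] / df["importance"].sum()
    return df.sort_values("importance", ascending=False,
                          ignore_index=True)
```

Returning a dataframe alongside (or instead of) the figure keeps the plot reproducible and lets users filter, threshold, or join importances with other metadata.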