-
Dear Jianhong:
I'm honored to have read this excellent article, which has been very helpful for me in getting started with credit assignment in cooperative multi-agent reinforcement learning. However, unfortunately, I'…
-
In section '5.9.3.3 Estimating the Shapley Value':
https://github.com/christophM/interpretable-ml-book/blob/4759479f5d4287eaf353ab9db6cbeee506e54046/manuscript/05.9-agnostic-shapley.Rmd#L320-L321
…
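For context, the section referenced above ("Estimating the Shapley Value") describes a sampling-based approximation: average the change in prediction from adding the feature of interest across random feature permutations and random background instances. A minimal pure-Python sketch of that idea (the function names and the toy model `f` are illustrative, not from the book's code):

```python
import random

def shapley_mc(f, x, x_ref, j, n_samples=2000, rng=None):
    """Monte Carlo estimate of the Shapley value of feature j for instance x.

    f      : prediction function taking a feature vector (list of floats)
    x      : the instance to explain
    x_ref  : background data (list of feature vectors) supplying "absent" values
    j      : index of the feature of interest
    """
    rng = rng or random.Random(0)
    p = len(x)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice(x_ref)          # random background instance
        order = list(range(p))
        rng.shuffle(order)             # random feature permutation
        pos = order.index(j)
        # x_plus: j and everything before it in the permutation comes from x;
        # x_minus: identical, except j is taken from the background instance z.
        x_plus = [x[k] if order.index(k) <= pos else z[k] for k in range(p)]
        x_minus = [x[k] if order.index(k) < pos else z[k] for k in range(p)]
        total += f(x_plus) - f(x_minus)
    return total / n_samples
```

For a linear model with an all-zero background, the estimate recovers the coefficient times the feature value exactly, since every sampled permutation contributes the same marginal difference.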
-
Hi,
I am trying to perform a Shapley analysis for survival models, and I followed this advice: https://github.com/christophM/iml/issues/83.
After building a learner like this:
```
tsk = makeS…
```
-
- Add tutorial on LightGBM with early stopping in probatus
-
The documentation says the explain method returns "feature importances", but I would like more information about what these mean. Are these SHAP values? For a classification problem, do these relat…
-
Is there a setting or simple way to get global feature information (like Shapley values) for regression models (Keras layers or sequential models)?
I saw the [Explainable AI](https://cloud.google.…
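As background for the question above: for a small number of features, exact Shapley values can be computed by enumerating every coalition, and a global importance score is then commonly taken as the mean absolute Shapley value across instances. A framework-agnostic sketch under the usual simplifying assumption that "absent" features are replaced by a fixed baseline vector (all names here are illustrative, not part of any Keras or Cloud API):

```python
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Exact Shapley values for one instance by enumerating all 2^p coalitions.

    Absent features are set to the baseline value (an independence
    assumption); only feasible for a handful of features.
    """
    p = len(x)

    def v(S):
        # Coalition value: features in S take their values from x,
        # all others are replaced by the baseline.
        return f([x[k] if k in S else baseline[k] for k in range(p)])

    phi = []
    for j in range(p):
        others = [k for k in range(p) if k != j]
        val = 0.0
        for r in range(p):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size r
                w = factorial(r) * factorial(p - r - 1) / factorial(p)
                val += w * (v(set(S) | {j}) - v(set(S)))
        phi.append(val)
    return phi
```

By the efficiency property, the values sum to `f(x) - f(baseline)`; averaging `abs(phi[j])` over a dataset gives one reasonable notion of global importance for a regression model.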
-
**Is your feature request related to a problem? Please describe.**
Although the current version has great support for visualizing a single uplift tree, there's no way to visualize the feature importance…
-
### Context
You are matching _people_ based on topics they've expressed interest in and their match-making history. The goal is to identify a set of pairs (or triples) that maximiz…
-
A new feature we could add for model understanding is prediction explanation. This would answer the question "why did my model predict x?", allowing users to see which input features were the most imp…
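One lightweight way to answer "why did my model predict x?" is occlusion-style attribution: replace each feature in turn with a baseline value and record how much the prediction moves. A minimal sketch of that idea, assuming a fixed baseline vector (this is one possible design, not necessarily how the feature would be implemented):

```python
def explain_prediction(f, x, baseline):
    """Per-feature attribution for a single prediction.

    Replaces each feature with its baseline value and measures the
    resulting change in the model output (occlusion-style attribution).
    Returns a dict mapping feature index -> contribution.
    """
    out = f(x)
    contributions = {}
    for j in range(len(x)):
        x_occluded = list(x)
        x_occluded[j] = baseline[j]    # "remove" feature j
        contributions[j] = out - f(x_occluded)
    return contributions
```

Unlike Shapley values, this ignores feature interactions (each feature is occluded in isolation), but it needs only `p + 1` model calls, which makes it cheap enough for interactive use.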
-
Hello! Autosynth couldn't regenerate MachineLearning. :broken_heart:
Here's the output from running `synth.py`:
```
MlV1_ListJobsResponse to clients/machine_learning/lib/google_api/machine_learning/…
```