-
### Is your feature request related to a problem? Please describe.
The current IPL Prediction model in Project-Guidance/Machine Learning and Data Science/Intermediate/IPL Prediction/Regularisation - …
-
I wonder if you have considered the lime package, which generated quite a buzz in the machine learning interpretability sphere. It seems you are using feature importance after training the Random Fo…
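As context for the comparison being suggested, here is a minimal sketch (dataset and model are illustrative, not from this project) contrasting the Random Forest's built-in impurity-based feature importances with model-agnostic permutation importance; lime would add local, per-prediction explanations on top of these global views.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data standing in for the project's features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Global, impurity-based importances: computed from training splits and
# known to be biased toward high-cardinality / continuous features.
impurity_imp = rf.feature_importances_

# Global, model-agnostic importances: measured by shuffling each feature
# on held-out data and recording the drop in score.
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)

print("impurity-based:", impurity_imp.round(3))
print("permutation   :", perm.importances_mean.round(3))
```

A lime explanation (e.g. `LimeTabularExplainer.explain_instance`) would instead answer "why did the model predict this for *this* row", which neither global view provides.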
-
Abstract: Disentangled representations, where the higher-level data-generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representa…
-
Pose a question about one of the following possible readings:
“[The unreasonable effectiveness of deep learning in artificial intelligence](https://www.pnas.org/content/pnas/early/2020/01/23/19…
-
ALE currently supports numerical features only. An extension to categorical features is possible, but comes with serious caveats for interpretability (see https://compstat-lmu.github.io/iml_methods_li…
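For numerical features, the ALE recipe can be sketched in a few lines (a hedged sketch only; `predict`, `X`, and `feature` are illustrative names, and production implementations live in packages such as alibi or PyALE): bin the feature by quantiles, average the prediction difference between each bin's edges over the instances in that bin, then accumulate and center.

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=10):
    """First-order ALE for one numerical feature.

    predict : callable mapping an (n, d) array to (n,) predictions
    X       : (n, d) data matrix
    feature : column index of the numerical feature
    """
    x = X[:, feature]
    # Bin edges at empirical quantiles so each bin holds roughly equal data.
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    # Assign each instance to a bin 0..K-1 via the interior edges.
    idx = np.digitize(x, edges[1:-1], right=True)
    effects = np.zeros(len(edges) - 1)
    for k in range(len(edges) - 1):
        mask = idx == k
        if not mask.any():
            continue
        lo, hi = X[mask].copy(), X[mask].copy()
        lo[:, feature] = edges[k]      # bin lower edge
        hi[:, feature] = edges[k + 1]  # bin upper edge
        # Average local prediction difference within the bin.
        effects[k] = (predict(hi) - predict(lo)).mean()
    ale = np.cumsum(effects)  # accumulate the local effects...
    ale -= ale.mean()         # ...and center them around zero
    return edges, ale

# Usage with a toy additive model: the ALE of x0 should come out linear.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
predict = lambda A: 3 * A[:, 0] + A[:, 1] ** 2
edges, ale = ale_1d(predict, X, feature=0)
```

The step that breaks for categorical features is the binning: ALE needs an ordering of feature values to form the "replace by neighboring edge" differences, which is exactly the caveat discussed in the linked chapter.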
-
### Task motivation
Gene Regulatory Network (GRN) inference is pivotal in systems biology, offering profound insights into the complex mechanisms that govern gene expression and cellular behavior. Th…
-
We currently have limited data for training Machine Learning models. I suggest that we try these approaches and compare the performance:
1. Bootstrapping to augment the data, and training models on this…
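Approach 1 could be sketched roughly as follows (all names and the toy dataset are illustrative, not the project's data): draw bootstrap resamples of the small training set, fit one model per resample, average their predictions, and compare against a single model fit on the original data.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Small toy dataset standing in for the limited training data.
X, y = make_regression(n_samples=120, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
preds = []
for _ in range(50):
    # Bootstrap: sample with replacement, same size as the training set.
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    m = Ridge().fit(X_tr[idx], y_tr[idx])
    preds.append(m.predict(X_te))
bagged = np.mean(preds, axis=0)

baseline = Ridge().fit(X_tr, y_tr).predict(X_te)
print("single model R^2      :", r2_score(y_te, baseline))
print("bootstrap ensemble R^2:", r2_score(y_te, bagged))
```

One caveat worth stating up front in the comparison: bootstrapping resamples the existing data rather than adding new information, so it mainly stabilizes variance; any gains should be validated on a held-out set as above.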
-
Thanks for presenting! Related to #10, I'm curious whether you've looked at all at using models with these features to predict voting behavior for elected officials over time? I'd suspect that c…
-
Hi,
I've been using ``tf-keras-vis`` for a while and wanted to speed things up by using more than one GPU to compute the saliency maps of multiple images. I noticed that when using the interpretab…
-
Read through chapters 2-10 of the books, gathering knowledge ASAP.
#### Summary
Machine learning has great potential for improving products, processes and research. But computers usually do not …