-
## Abstract
Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction.
when cho…
-
Dear Author:
I am a graduate student in computer science in China, and my research focuses on biomedicine. Recently, I have been reading your article "MGP-AttTCN: An interpretable ma…
-
Hi,
thanks for the time and effort you've put into this project. I wanted to ask: is there any possibility we could get pre-compiled versions/releases, so that we don't need to compile anything on…
-
[Visualizing and Understanding Recurrent Networks](https://arxiv.org/abs/1506.02078)
Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying rene…
-
## History of Statistics
The Life, Letters and Labours of Francis Galton, by Karl Pearson
https://galton.org/pearson/
## The R Language
An Introduction to R
https://colinfay.me/intro-to-r/
Outstanding User Interfa…
-
[Why should I trust you?: Explaining the predictions of any classifier](https://arxiv.org/abs/1602.04938)
Despite widespread adoption, machine learning models remain mostly black boxes. Understandi…
-
Pose a question about one of the following possible readings:
“[The unreasonable effectiveness of deep learning in artificial intelligence](https://www.pnas.org/content/pnas/early/2020/01/23/19…
-
##### **NAME: Joan Amarachukwu IFEANYI**
**PROJECT:** Interpret (https://interpret.ml/, [GitHub - interpretml/interpret: Fit interpretable models. Explain blackbox machine learning.](https://github.c…
-
**Submitting author:** @BirkhoffG (Hangzhi Guo)
**Repository:** https://github.com/BirkhoffG/jax-relax/
**Branch with paper.md** (empty if default branch): joss
**Version:** v0.2.4
**Editor:** @Fei-Ta…