-
Hi Jenny,
I find this GNF work really interesting, and I have a question about the model design for graph generation in Figure 2 of the original NeurIPS 2019 paper.
![image](https://user-im…
-
### Description
I was wondering if there was a way to share a link to my runs while hiding the group name. This would allow me to share wandb links with reviewers for double-blind conferences (ICLR, …
-
Would you be interested in a variant of NB that is more recent and more performant?
Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003). Tackling the poor assumptions of naive bayes text classif…
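For context, the core idea of the Rennie et al. (2003) paper — estimate each class's word weights from the *complement* of that class — can be sketched in plain Python. This is a hedged illustration only (the paper also applies TF-IDF-style transforms and weight normalization, omitted here; function names are mine); scikit-learn ships a full implementation as `sklearn.naive_bayes.ComplementNB`.

```python
from collections import Counter
import math

def train_cnb(docs, labels, alpha=1.0):
    """Complement NB sketch: for each class c, estimate word log-probabilities
    from all documents NOT in c (the complement), with Laplace smoothing."""
    vocab = {w for d in docs for w in d}
    weights = {}
    for c in set(labels):
        comp = Counter()
        for d, y in zip(docs, labels):
            if y != c:
                comp.update(d)
        total = sum(comp.values()) + alpha * len(vocab)
        weights[c] = {w: math.log((comp[w] + alpha) / total) for w in vocab}
    return weights

def predict_cnb(weights, doc):
    # Pick the class whose complement assigns the LOWEST likelihood to the doc.
    scores = {c: sum(w.get(t, 0.0) for t in doc) for c, w in weights.items()}
    return min(scores, key=scores.get)

docs = [["good", "movie"], ["bad", "movie"], ["great", "film"], ["awful", "film"]]
labels = [1, 0, 1, 0]
weights = train_cnb(docs, labels)
print(predict_cnb(weights, ["good", "film"]))  # → 1
```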
-
Great review — everything seems to be in line with the instructions, and the formatting is good.
There seems to be a spelling error in the 'CLIP' section; I believe it should be 'Given'.
The introduction section needs…
-
# vanishing and exploding gradient / sensitivity
- (**must see**) X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
- (**must see**)…
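The Glorot & Bengio entry above motivates the "Xavier" initialization, which counteracts vanishing/exploding signals by keeping activation and gradient variance roughly constant across layers. A minimal pure-Python sketch of the uniform variant (function name and seeding are mine):

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform init: sample weights from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    rnd = random.Random(seed)
    return [[rnd.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = glorot_uniform(256, 128)  # weight matrix for a 256 -> 128 layer
```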
-
Hi, I'm trying to use dopamine to replicate the published result of C51 on Breakout, which seems to be around 700. However, my training runs appear to be stuck around 400-500 after a rapid increas…
-
## Paper link
https://arxiv.org/abs/1306.0239
## Publication date (yyyy/mm/dd)
2013/06/02
ICML 2013 Challenges in Representation Learning Workshop
## Summary
By using an L2-SVM, the paper makes it possible to combine a CNN with an SVM loss.
## Related
- [An Arc…
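The summary above refers to the L2-SVM, i.e. the squared-hinge objective used in place of softmax cross-entropy. A minimal sketch of that loss in its per-sample, one-vs-rest binary form (helper name is illustrative):

```python
def l2_svm_loss(scores, targets, margin=1.0):
    """Squared hinge (L2-SVM) loss: margin violations are penalized
    quadratically; targets are +1/-1 as in the one-vs-rest setup."""
    return sum(max(0.0, margin - t * s) ** 2
               for s, t in zip(scores, targets))

print(l2_svm_loss([2.0, 0.5], [1, 1]))  # → 0.25
```

Because the squared hinge is differentiable, it can be dropped in as the top-layer loss of a CNN and trained by backpropagation, which is the combination the paper describes.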
-
Investigate with the ANSSI teams the tools used downstream of the writing stage to produce the print version of the guide.
The current working hypothesis is that the input format will be `.docx` …
-
Venue: ICML 2019
Summary: Proposes a simplified linear graph neural network architecture (a GCN with the non-linearity layers removed). The new architecture is significantly faster than the state of the art mo…
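Removing the non-linearities collapses a k-layer GCN into k steps of parameter-free feature propagation with the normalized adjacency, followed by a single linear classifier. A small numpy sketch of that propagation (function name is mine; normalization follows the symmetric form used for GCNs):

```python
import numpy as np

def sgc_features(A, X, k=2):
    """Simplified GCN propagation: compute S^k X with
    S = D^{-1/2} (A + I) D^{-1/2}; no trainable parameters,
    so it can be precomputed once before fitting a linear model."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization
    for _ in range(k):
        X = S @ X                           # k propagation steps
    return X  # feed into an ordinary logistic-regression classifier
```

Precomputing `S^k X` once is what makes the method so much faster: training then reduces to fitting a single linear layer.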
-
Hi!
I'm new to GPyTorch and am currently working on a project that requires a heteroskedastic GP that can fit the noise model without direct noise observations (I'm aware of the `Heteroskedastic…