-
## 📝 Description
A section of this chapter will be dedicated to research papers that cite this chapter's paper. We'll need to go over all the papers that cite it, decide whi…
-
Hi, is this paper under review at ICLR 2021? The format looks similar to ICLR's template.
-
### News
- Conference
- ICLR 2023: Rwanda! (west of Kenya and Tanzania, east of the Democratic Republic of the Congo)
![image](https://user-images.githubusercontent.com/11782739/166086698-b848f6aa-c5fb-48fe-b337-76ee61b5d110.png)
- [Hyperscale…
-
We should probably maintain a list of domains that belong to the same institution so that they are considered in the conflict-of-interest checks.
This is from an email from Andrew:
**google.com and deepmind.com should …
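The idea above (treating google.com and deepmind.com as one institution for conflict checks) could be sketched roughly as follows. This is a minimal, hypothetical illustration — the group list, function names, and the equality-based conflict rule are all assumptions, not the actual implementation of any conference's system:

```python
# Hypothetical sketch: group reviewer/author email domains by institution so
# that domains like google.com and deepmind.com count as the same institution
# in a conflict-of-interest check. All names here are illustrative.

SAME_INSTITUTION = [
    {"google.com", "deepmind.com"},  # the example from the email above
]

def institution_of(domain: str) -> frozenset:
    """Return the set of domains sharing an institution with `domain`.

    A domain not listed in any group is its own singleton institution.
    """
    for group in SAME_INSTITUTION:
        if domain in group:
            return frozenset(group)
    return frozenset({domain})

def in_conflict(domain_a: str, domain_b: str) -> bool:
    """Two people conflict if their domains map to the same institution."""
    return institution_of(domain_a) == institution_of(domain_b)
```

With this grouping, `in_conflict("google.com", "deepmind.com")` is true while unrelated domains remain non-conflicting; extending coverage is just a matter of adding more sets to the list.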
-
### 🚀 The feature, motivation and pitch
## Infeasibility of Frequent Global Synchronization
These days, powerful industry users (not necessarily FAANG), such as Cruise, Microsoft, and Tesla, are moving…
wayi1 updated
2 years ago
-
1. For CIFAR100-LT
a. Are there separate val and test sets?
b. On which dataset split do you choose the best-trained model?
c. Which split do you use for hyperparameter tuning?
2. For iNatura…
-
Thanks for the great work!
I am just wondering why the prompt-tuning scores in Table 1 are much lower than those reported in other papers? For instance, on MNLI and SST-2, your prompt tuning num…
-
Hi There!
This is a really cool corpus, @fchollet :-)
I'm wondering if a version of this task could be adapted to this ICLR workshop, centered on the construction of a big set of sequence-to-seq…
-
Hi
Very interesting work! I had a similar submission to ACL 2022 (https://github.com/apoorvumang/kgt5) and wanted to ask: did you try training SimKGC from scratch, i.e., without …
-
- News
  - The ICLR 2022 results are out. By the way, what kind of conference is ICLR?
    - In its 10th year as of this year
    - Founded around Bengio and LeCun with a focus on representation learning, as ICML and NeurIPS grew too large
    - Since its earliest editions it has featured VGG, Adam, Seq2seq with atte…