-
This model is expected to take time to train given the enormous state size, but the trends do not indicate any improvement at all. Because of this, I am very confident that there needs to be s…
-
So I was wondering about my performance with different keys and had a look at the graphs. No matter how I look at it, the graph seems to wrongly show quite a positive trend while reporting a learning rate of -0.4 wpm. N…
-
I'm very interested in your paper “Towards Expansive and Adaptive Hard Negative Mining: Graph Contrastive Learning via Subspace Preserving”. Can you provide the corresponding code? Thank you very much…
-
[Graph contrastive learning with augmentations](https://proceedings.neurips.cc/paper_files/paper/2020/hash/3fe230348e9a12c13120749e3f9fa4cd-Abstract.html)
```bib
@article{you2020graph,
title={Gra…
-
https://arxiv.org/abs/1503.03832
-
A recent result in e-discovery:
[Scalability of Continuous Active Learning for Reliable High-Recall Text Classification](http://dl.acm.org/citation.cfm?id=2983776) mentions one technique to tackle th…
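At its core, Continuous Active Learning (CAL) is a short loop: train a classifier on the documents reviewed so far, rank the unreviewed documents by predicted relevance, have a reviewer label the top of the ranking, and repeat. A minimal sketch with scikit-learn on synthetic data; all sizes and names are illustrative, and the oracle labels `y` stand in for a human reviewer:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy corpus: 200 "documents" with 10 features; relevance is a
# linear function of the features (purely illustrative data).
X = rng.normal(size=(200, 10))
y = (X @ rng.normal(size=10) > 1.0).astype(int)

# Seed set: one known relevant document plus a few non-relevant ones.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
labeled = [int(pos[0])] + [int(i) for i in neg[:5]]

# CAL loop: train on everything reviewed so far, score the rest,
# review the top-ranked batch, add its labels, and repeat.
for _ in range(15):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    seen = set(labeled)
    pool = [i for i in range(len(X)) if i not in seen]
    scores = clf.predict_proba(X[pool])[:, 1]
    batch = [pool[j] for j in np.argsort(scores)[::-1][:5]]
    labeled += batch  # here the oracle y supplies the "reviewer" labels

recall = sum(int(y[i]) for i in labeled) / int(y.sum())
```

The point of ranking by `predict_proba` rather than sampling at random is that review effort concentrates on likely-relevant documents, which is what drives high recall at a fixed review budget.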
-
## Paper link
- [arXiv](https://arxiv.org/abs/2011.08435)
## Publication date (yyyy/mm/dd)
2020/11/17
## Overview
## TeX
```
% yyyy/mm/dd
@inproceedings{
hu2021adco,
title={AdCo: Adversarial Contrast…
-
Intuitively, by using hard negatives, we are trying to push random negatives with high logits away from the true positive. Since the negatives are random, isn't this forcing the model at t+1 to be dr…
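For concreteness, one common way to write the hard-negative idea down is: keep only the negatives whose logits against the anchor are highest, and feed just those into an InfoNCE-style loss. This is a generic sketch, not the paper's actual method; the top-k selection, the temperature, and all names are assumptions:

```python
import numpy as np

def info_nce_hard(anchor, positive, negatives, k=5, temperature=0.1):
    """InfoNCE-style loss over only the k hardest negatives, i.e. the
    ones with the highest similarity logits to the anchor.
    All vectors are assumed L2-normalized."""
    pos_logit = float(anchor @ positive) / temperature
    neg_logits = (negatives @ anchor) / temperature
    hard = np.sort(neg_logits)[-k:]       # top-k logits = hardest negatives
    logits = np.concatenate(([pos_logit], hard))
    m = logits.max()                      # stable log-sum-exp
    return -pos_logit + m + np.log(np.exp(logits - m).sum())

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
a, p = unit(rng.normal(size=16)), unit(rng.normal(size=16))
negs = unit(rng.normal(size=(50, 16)))
loss = info_nce_hard(a, p, negs)
```

Note that only the k most confusable directions receive gradient here; how much the model at t+1 drifts from pushing on sampled negatives depends on k, the temperature, and how the negatives are drawn.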
-
Dear Professor,
In the code you provided, the value of Z ranges from 0 to 1, while in the actual data, Z ranges from 0 to -1. This discrepancy in conventions appears to cause a misalignment in the re…
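If the only mismatch really is the sign convention (code expecting Z in [0, 1], data storing Z in [0, -1]), then a sign flip on load would reconcile the two; this is a hypothetical illustration, not a statement about the actual codebase:

```python
import numpy as np

# Data convention: Z in [0, -1]; code convention: Z in [0, 1].
z_data = np.array([0.0, -0.25, -0.6, -1.0])
z_code = -z_data  # flip the sign so 0 stays 0 and -1 maps to 1
```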
-
When I was studying protein deep learning, I encountered a new problem when I ran this part:
```python
information = {}
count = 1
failures = []
for code, tm, numr, wt_val, mut_val in zip(codes, tms, resnum, wt, …
```