-
Hi,
May I know if espnet supports self-supervised loss with conformer training for unsupervised ASR, such as contrastive loss?
Looking forward to your reply!
-
According to the paper, the negative component of the contrastive loss is the difference between the negative states (randomly sampled from the embeddings at timestep t, denoted z̃_t) and the ground truth sta…
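To make the description above concrete, here is a minimal sketch of such a contrastive loss, assuming a wav2vec 2.0-style setup where the negatives z̃ are drawn from other timesteps of the target sequence. The function name, tensor shapes, and sampling scheme are illustrative assumptions, not ESPnet's actual implementation:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, targets, num_negatives=10, temperature=0.1):
    """Hypothetical sketch of an InfoNCE-style contrastive loss.

    context:  (T, D) predicted states c_t
    targets:  (T, D) ground-truth states z_t
    Negatives z̃ are sampled uniformly from other timesteps of `targets`.
    """
    T, D = targets.shape
    # sample negative indices per timestep (excluding t itself is omitted for brevity)
    neg_idx = torch.randint(0, T, (T, num_negatives))
    negatives = targets[neg_idx]                                       # (T, K, D)
    candidates = torch.cat([targets.unsqueeze(1), negatives], dim=1)   # (T, 1+K, D)
    # cosine similarity between each c_t and its candidate set
    logits = F.cosine_similarity(context.unsqueeze(1), candidates, dim=-1) / temperature
    # the positive z_t sits at index 0 for every timestep
    labels = torch.zeros(T, dtype=torch.long)
    return F.cross_entropy(logits, labels)
```

In practice one would also mask the positive index out of the sampled negatives so that z_t cannot appear on both sides of the comparison.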
-
Hi,
First of all, great contribution to the field of image manipulation. Could you please share how many GPUs were used and how long it took to train the model?
Thanks,
Him…
-
```
Traceback (most recent call last):
  File "train.py", line 603, in <module>
    main()  # pylint: disable=no-value-for-parameter
  File "E:\anconda3\envs\diffusionGAN\lib\site-packages\click\core.py", line 1…
```
-
That is to say, comparing supervised contrastive loss and self-supervised contrastive loss, the former can be regarded as global and the latter as local.
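One way to see the global/local distinction in code is the following hedged sketch (names and shapes are illustrative): a supervised contrastive (SupCon-style) loss treats every sample sharing a label as a positive, while the self-supervised case is the special case where each anchor's only positive is its own augmented view.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss: all samples with the
    same label are positives for each other (the "global" view)."""
    z = F.normalize(features, dim=1)
    n = z.size(0)
    sim = z @ z.t() / temperature
    eye = torch.eye(n, dtype=torch.bool)
    # exclude self-similarity from the denominator
    sim = sim.masked_fill(eye, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # zero the (excluded) diagonal so the masked multiply below stays finite
    log_prob = log_prob.masked_fill(eye, 0.0)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # average log-probability over each anchor's set of positives
    return -(log_prob * pos_mask).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()
```

Passing `labels = torch.arange(n // 2).repeat(2)` for a batch of two augmented views reduces this to the self-supervised (SimCLR-style) objective, where positives are purely instance-local.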
-
### Model description
Contrastive Audio-Visual Masked Autoencoder (CAV-MAE) combines two major self-supervised learning frameworks: contrastive learning and masked data modeling, to learn a joint and…
-
Going to look into this in detail now, but got the following error while running the imagenet ipynb
```
RuntimeError Traceback (most recent call last)
in ()
8 prob_outputs_dog = Variable(t…
-
Hi!
When I read your source code, I found that you set vocab_size = self.codebook_size + 1000 + 1 in the token embedding stage. Why not directly set vocab_size = self.codebook_size? What does the extra 1001 em…
-
proto_nce_loss = self.proto_reg * (proto_nce_loss_user + proto_nce_loss_item)
Hi, for Contrastive Learning with Semantic Neighbors, you set the self.proto_reg parameter to a very small value. Is there a rule of thumb for choosing it?
-
Hi, I found that the loss used in this repo is a binary cross-entropy loss between the prediction and the mask:
`loss = F.binary_cross_entropy_with_logits(pred, mask)`
But the loss mentioned in the paper is a …
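For reference, `F.binary_cross_entropy_with_logits` is the numerically stable fused form of a sigmoid followed by binary cross-entropy, so the repo's line is equivalent to the two-step version below (the tensor shapes here are made up for illustration):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(2, 1, 4, 4)                     # raw logits from the network
mask = torch.randint(0, 2, (2, 1, 4, 4)).float()   # binary ground-truth mask

# fused form: sigmoid + BCE in one call, numerically more stable
loss_fused = F.binary_cross_entropy_with_logits(pred, mask)

# explicit two-step form, mathematically the same loss
loss_manual = F.binary_cross_entropy(torch.sigmoid(pred), mask)
```

Both calls produce the same scalar (up to floating-point rounding), so any difference from the paper would lie in the loss formulation itself rather than in this function choice.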