-
Dear OpenAI researchers,
Thanks for the EBM code release. I've been really interested in energy-based models since I read Igor Mordatch's paper _Concept Learning with Energy-Based Models_ …
-
It could be interesting to use the pretrained audio embeddings [Contrastive learning of general purpose audio representations](https://github.com/google-research/google-research/tree/master/cola) to f…
-
Have you tried larger models such as ResNet-50/101/152 or DenseNet, or wider models like Wide-ResNet? Have you also tried MoCo v2 in addition to SimCLR? Why did you ultimately choose ResNet-18 and SimCLR? This is…
-
Hello, thanks for this project! While the provided MNIST example is fine for learning the basics, more real-world indexing and querying examples for contrastive-learning use cases would be helpful. By …
-
Hi,
Congratulations on the paper—it is truly interesting! I have a few questions regarding the implementation and the reproducibility of the results.
For the Cityscapes dataset, I downloaded the…
-
Hi,
Thanks for the great work!!
The paper states that during part-1 training (i.e. CLIP-based Contrastive Latent Representation Learning step) you consider image, text and audio modalities. But th…
-
The BGE-M3 paper mentions the MCLS (Multiple CLS) strategy for enhancing the model's long-text capability without additional training. Does this repo contain an implementation of this strategy?
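For context, my understanding of MCLS from the paper is that a CLS token is inserted for every fixed number of content tokens, and the final text embedding is the average of the hidden states at those CLS positions. A minimal sketch of just the token-insertion step (the `insert_mcls_tokens` helper name and the default interval are my own assumptions, not taken from this repo):

```python
def insert_mcls_tokens(tokens, interval=256, cls_token="[CLS]"):
    """Insert a CLS token before every `interval` content tokens.

    Long inputs then carry multiple CLS positions; downstream, the text
    embedding would be the average of the hidden states at these positions.
    """
    out = []
    for i, tok in enumerate(tokens):
        if i % interval == 0:
            out.append(cls_token)  # start of a new chunk gets its own CLS
        out.append(tok)
    return out
```

For example, with `interval=2`, a five-token input picks up three CLS positions, one per two-token chunk.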
-
Hello everyone,
I'm trying to use pytorch-lightning to train PixelCL on 2 GPUs with the ddp2 accelerator.
I followed [this](https://github.com/lucidrains/byol-pytorch/blob/master/examples/lightning…
-
# Keywords
In-batch negative training
# TL;DR
Train a better dense embedding model using only pairs of questions and passages, without additional pretraining.
# Abstract
Open-domain question ans…
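As a reader's sketch of the in-batch negative objective summarized above: each question in a batch is scored against every passage in the same batch, and the paired passage on the diagonal serves as the positive while the rest act as negatives. This assumes cosine similarity and a temperature of 0.05, both illustrative choices; the function name is hypothetical:

```python
import numpy as np

def in_batch_negative_loss(q, p, temperature=0.05):
    """Cross-entropy over the question-passage similarity matrix,
    where row i's correct "class" is column i (its paired passage)."""
    # L2-normalize so the dot product is cosine similarity
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    sim = q @ p.T / temperature                    # (batch, batch) scores
    sim = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; average their negative log-likelihood
    return -np.mean(np.diag(log_probs))
```

The appeal of this setup is that every other passage in the batch is reused as a free negative, so no separate negative mining is required for a baseline.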
-
Options are:
1. Write that info to an additional file.
2. Write those columns anyway, accepting that the output is not truly compliant.
3. See if MEDS can expand the `label_schema` to include additional columns m…