-
Direction changed; the text will be updated soon.
Old stuff:
- 1997: [The Internet: A Future Tragedy of the Commons?](https://link.springer.com/chapter/10.1007/978-1-4757-2644-2_22)
- [Internet Securi…
-
-
Great work, and thank you for sharing your research results.
I'd like to know which text encoder you used during training.
Did you use OpenCLIP ViT-H/14 for both the text encoder and the image encoder?
And I wou…
-
Hi Ziqiao,
Thanks for sharing your codebase!
I am interested in your excellent work on "Molecular Property Prediction by Semantic-invariant Contrastive Learning."
I want to follow it, but its c…
-
Hi, thanks for providing this training script for the CLIP-L model. Is it possible to modify it to train CLIP-G? I tried, but the `clip` library doesn't include CLIP-G. Then I tried using open_clip…
-
python main_train.py --devices 0,1 --hp /home/u094724e/aimed2022/MIM-Refiner/src/yamls/stage2/l16_d2v2_custom.yaml
```
#l16_d2v2_custom.yaml
datasets:
  train:
    template: ${yaml:datasets/cif…
```
-
Hello,
I'm running out of memory during contrastive pretraining with the default config. I'm using 8 GPUs with 40 GB each, and I still ran out of memory after decreasing the batch size to 1024.
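Not the repo's actual config mechanism, just back-of-the-envelope arithmetic: if the effective batch size is held fixed, gradient accumulation shrinks the micro-batch each GPU must fit — though for contrastive losses the pool of in-batch negatives shrinks too unless features are cached across accumulation steps (GradCache-style). A hypothetical helper:

```python
def per_gpu_micro_batch(effective_batch: int, n_gpus: int, accum_steps: int) -> int:
    """Micro-batch each GPU must hold, given a target effective batch size."""
    assert effective_batch % (n_gpus * accum_steps) == 0
    return effective_batch // (n_gpus * accum_steps)

# 1024 over 8 GPUs with no accumulation -> 128 samples per GPU per step;
# 4 accumulation steps shrink that to 32.
print(per_gpu_micro_batch(1024, 8, 1))  # 128
print(per_gpu_micro_batch(1024, 8, 4))  # 32
```

Activation checkpointing on the encoder is the other common lever when even the smallest viable micro-batch does not fit.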
-
Hi Lin, sorry to bother you. I am trying to reproduce the InfoSeek results using the PreFLMR model, but the numbers are not close.
According to Table 2 of the PreFLMR paper, the reported PreFLMR (G B-v2 1.96B …
-
### System Info
- `transformers` version: 4.40.1
- Platform: Linux-5.15.0-1053-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.1
…
-
Dear Author,
Hello! I recently read your article and am very intrigued by your research.
I attempted to access the relevant code, but unfortunately, I couldn't find **the Code Related to Contras…