-
Here is the Google Colab link I used for fine-tuning:
https://colab.research.google.com/drive/1kiALBR1UarPobiftZmiHfwFyk7hTCDnV?usp=sharing
When I fine-tune the LLM-embed for tool retriev…
-
Hi there,
I was searching for how people implemented CLIP and found this repo. There are some problems/differences in the loss function compared with the [CLIP Paper](https://arxiv.org/pdf/2103.00020.pdf):
1) If you …
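For reference, the symmetric contrastive loss described in the CLIP paper (cross-entropy over cosine-similarity logits in both the image-to-text and text-to-image directions) can be sketched roughly as follows. This is a minimal NumPy sketch for comparison, not this repo's code; the fixed `temperature` stands in for the paper's learned logit scale:

```python
import numpy as np

def clip_loss(image_embeds, text_embeds, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (image, text) pairs."""
    # L2-normalize so the dot product is cosine similarity
    image_embeds = image_embeds / np.linalg.norm(image_embeds, axis=1, keepdims=True)
    text_embeds = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)

    # Pairwise similarity logits, scaled by temperature
    logits = image_embeds @ text_embeds.T / temperature

    # The i-th image matches the i-th text, so labels are the diagonal
    n = logits.shape[0]
    labels = np.arange(n)

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image->text and text->image directions
    loss_i = cross_entropy(logits, labels)
    loss_t = cross_entropy(logits.T, labels)
    return (loss_i + loss_t) / 2
```

A correct implementation should give a much lower loss when each image embedding lines up with its own text embedding than when the pairing is scrambled.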
-
Here are some ideas and potential areas of research for Tensort:
- Model analysis and interpretability: Develop new techniques for analyzing and understanding what large language models have learned …
-
Hello, thanks for your excellent work.
I am trying to use contrastive learning in relevant object detection tasks (e.g. semi-supervised object detection), and I wrote my contrastive loss code by referring…
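Since the snippet is cut off, here is the generic single-positive InfoNCE form that most contrastive detection losses build on, as a hedged NumPy sketch; the names `query`, `positive`, and `negatives` are illustrative (e.g. ROI features), not taken from this repo:

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.1):
    """InfoNCE loss for one query with one positive and k negatives.

    query:     (d,)   anchor feature
    positive:  (d,)   feature that should score high against the anchor
    negatives: (k, d) features that should score low against the anchor
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q, p, n = l2norm(query), l2norm(positive), l2norm(negatives)

    # Logit 0 is the positive pair; the rest are the negatives
    logits = np.concatenate([[q @ p], n @ q]) / temperature
    logits = logits - logits.max()  # numerical stability

    # -log softmax probability of the positive
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The loss drops when the query is close to its positive and far from the negatives, which is the property a detection-side contrastive head relies on.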
-
I followed the guidance to fine-tune the embedding model and reranker model on my tasks and got good performance. Big thanks to you guys!
My question is: are there any documents or papers that explain…
-
Hi!
Why do you use the threshold (0.5) in cal_CIoU, even though training doesn't involve the 0.5 anywhere? In other words, is it just from hyperparameter tuning, or reasoned from mathemat…
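To make the question concrete: the role such a threshold usually plays is to binarize a continuous prediction map before computing IoU against the ground truth. The sketch below shows that common convention in NumPy; it is an assumption about the general pattern, not necessarily what `cal_CIoU` does internally:

```python
import numpy as np

def thresholded_iou(pred_map, gt_mask, threshold=0.5):
    """Binarize a continuous prediction map, then compute IoU vs. a boolean mask."""
    pred_mask = pred_map >= threshold          # the 0.5-style cutoff in question
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0
```

With a cutoff like this, the choice of 0.5 only affects evaluation, not training, which is exactly why it looks like a free hyperparameter.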
-
-
#NIPS2017
Institute: CUHK
URL: https://arxiv.org/pdf/1710.02534.pdf
Keywords: Image Captioning, Contrastive Learning
Interest: 2
Code: https://github.com/doubledaibo/clcaption_nips2017 (Not yet…
-
https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model
## Motivations
* We want a solar PV forecasting model which:
* Can handle large inputs (e.g. the whol…
-
### Link to the paper
[[arXiv:2002.05709] A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/abs/2002.05709)
### Authors / Affiliations
Ting Chen, Simon Kornblith, Mohammad Norou…