-
Hi @kaiyuyue,
I see the [lines](https://github.com/kaiyuyue/nxtp/blob/main/src/encoding.py#L115-L119) in `encoding.py`. Why are these samples skipped? For the ImageNet-1k dataset, many categories…
-
It would be amazing if emotion markers could be supported (or, if they already are, documentation on how to use them), for example by providing indicators like ``, ``, etc., or by using emojis for the same purpose.
-
Not clear how this is better than Slurm or Spark, but it seems fun.
-
Hi,
I am trying to search the hosted clip-retrieval backend index by text, but I keep getting an internal server error.
Here is the code I used:
```python
from clip_retrieval.clip_client import C…
```
-
I am trying to fine-tune a CLIP model using the pretrained checkpoint https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K,
but I ran into a bug:
![image](https://user-images.githubusercontent.com/124332581/22967958…
-
Hi all!
Which datasets were used for the pre-trained models provided in the Google link?
Were [630k-audioset-best.pt](https://huggingface.co/lukewys/laion_clap/blob/main/630k-audioset-best.pt) and […
-
https://github.com/FreddeFrallan/Multilingual-CLIP
It should be possible to add it similarly to https://github.com/LAION-AI/CLIP_benchmark/blob/main/clip_benchmark/models/japanese_clip.py
do you …
-
Hi! Thanks for your Disco paper and the explanation of the TSV file preparation.
In the composite YAML file, a 'caption linelist' file is used:
caption_linelist: train_TiktokDance-coc…
-
I am trying to verify/reproduce your paper's validation results **without training** the model myself, and expected 42.6% R@1 accuracy on MSR-VTT.
But when I follow the instructions from [TRAIN_AND_VALID…
-
Hey there!
It's Robert from LAION! Congratulations on this really interesting dataset release!
I was just wondering if it was possible for you to release details on your internal watermark detec…