-
The CLIP training example is broken.
It raises this error:
`TypeError: 'JpegImageFile' object is not subscriptable`
at line 151 of this section:
https://github.com/UKPLab/sentence-transformers/blob/f…
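This error message suggests the example is indexing a PIL image object directly, which PIL does not support. A minimal sketch, assuming that is the cause (the in-memory JPEG and the `np.array` workaround are illustrative, not the repo's code):

```python
import io
import numpy as np
from PIL import Image

# Create a tiny in-memory JPEG so the repro is self-contained.
buf = io.BytesIO()
Image.new("RGB", (4, 4)).save(buf, format="JPEG")
buf.seek(0)
img = Image.open(buf)  # a JpegImageFile, like the one in the training example

try:
    img[0]  # PIL images define no __getitem__, so indexing fails
except TypeError as e:
    print(e)  # 'JpegImageFile' object is not subscriptable

# One workaround: convert to an array (or tensor) before any indexing.
pixels = np.array(img)
print(pixels.shape)  # (4, 4, 3)
```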
-
First of all, thanks for your great work!
I also noticed that the `ln_pre` layer appears to be a redundant component:
https://github.com/openai/CLIP/blob/3b473b0e682c091a9e53623eebc1ca1657385717/clip/m…
-
Loading CLIP model...
ViT-L-14_openai_artists.safetensors: 100%|████████████████████████████████████████| 16.2M/16.2M [00:00
-
### Model description
I want to export CLIP as two separate ONNX models, a text encoder and an image encoder, but it seems only the whole model can be converted. How can I split CLIP into two ONNX models?
### Open source st…
-
I am trying to apply CLIP to a **very** specific dataset and need to fine-tune it. I am fine-tuning by following the steps here: https://github.com/openai/CLIP/issues/83.
But I cannot figure out what …
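The recipe in that issue boils down to a symmetric contrastive (InfoNCE) loss over the batch's image and text embeddings. A minimal sketch of that loss, assuming standard CLIP-style training (the toy embeddings stand in for real encoder outputs):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, logit_scale):
    """Symmetric cross-entropy loss used for CLIP-style fine-tuning."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # pairwise cosine similarities, scaled by the learned temperature
    logits_per_image = logit_scale * image_emb @ text_emb.t()
    # the matching pair sits on the diagonal
    labels = torch.arange(logits_per_image.size(0))
    loss_i = F.cross_entropy(logits_per_image, labels)      # image -> text
    loss_t = F.cross_entropy(logits_per_image.t(), labels)  # text -> image
    return (loss_i + loss_t) / 2

# Toy batch: 4 image/text embedding pairs of dimension 8.
img = torch.randn(4, 8, requires_grad=True)
txt = torch.randn(4, 8, requires_grad=True)
loss = clip_contrastive_loss(img, txt, logit_scale=100.0)
loss.backward()  # gradients flow back toward both encoders
print(float(loss))
```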
-
The list info can also be found:
{
"clip_id": "4f4df261-4cc4-4e84-b3b2-a2e7dd0e1bee",
"base_url": "../../../range/avf/",
"video": [
{
"id": "5b35cccb-b474-4d82-a659-006f80cdd…
-
The current ImageCaptioning module uses Salesforce's [BLIP](https://github.com/salesforce/BLIP), which is pretty inaccurate at times.
OpenAI's [CLIP](https://github.com/rmokady/CLIP_prefix_caption) c…
-
I tried two GGUF conversions on an M2 Ultra (Metal), but no luck. I converted the models myself and still get the same error.
Here is the first model I tried:
https://huggingface.co/guinmoon/MobileVLM-1.7B-GGUF…
-
I am using `finetune_lora.sh` with `zero3_offload.json` to train (context below) and get the following error.
```
Traceback (most recent call last):
File "/deep/u/emily712/GeoChat/geochat/train/tr…
```
-
Dear researchers,
I wanted to share some findings from experimenting with your amazing Long-CLIP model: while ViT-L/14 (77 tokens) also shows partial mitigation of the typographic attack vuln…