-
### Description
The titles are not above the correct columns.
### (Optional:) Please add any files, screenshots, or other information here.
_No response_
### (Required) What is this issue most clos…
-
Hello dear authors! In the code "image_features, patch_tokens = model.encode_image(image, features_list)", is "image_features" the global image representation? Just like the source code in OpenCLIP: "…
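For context, a minimal sketch of how stock OpenCLIP exposes the global embedding (the two-value return in the snippet above is a repo-specific modification; the architecture name and image path below are placeholders):
```
import torch
from PIL import Image
import open_clip

# Placeholder architecture/weights; use whatever the repo builds on.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="openai"
)
image = preprocess(Image.open("example.png")).unsqueeze(0)

with torch.no_grad():
    # Stock OpenCLIP returns only the pooled, projected global embedding;
    # the patched encode_image above additionally returns intermediate
    # patch tokens for the layers listed in features_list.
    image_features = model.encode_image(image)
    image_features /= image_features.norm(dim=-1, keepdim=True)
```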
-
Hi @LukasMut
https://github.com/ViCCo-Group/thingsvision/blob/ad903f383a8e647057010bd1618b0b8770e8f3a7/thingsvision/utils/alignment/transforms.py#L14
I wanted to ask whether this URL is still valid.
…
-
```
# Run the embedded inputs through the transformer encoder stack.
encoder_outputs = self.encoder(
    embedding_output,
    head_mask=head_mask,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
    return_dict=return_dict,
)
```
-
### Model description
OpenCLIP: https://github.com/mlfoundations/open_clip
I used OpenCLIP to train a model and got "epoch_400.pt".
**I want to convert this "epoch_400.pt" to the HF format, so I run:**
…
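The conversion command itself is truncated above. As a reference point, a checkpoint saved by OpenCLIP training can be loaded back with open_clip by pointing `pretrained` at the local file; the architecture name below is a placeholder and must match the one used for training:
```
import open_clip

# "ViT-B-32" is a placeholder; it must match the --model flag
# used during OpenCLIP training.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="epoch_400.pt"
)
```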
-
Great work! I've read the paper, and it seems `LLaVA+S^2` is implemented with the OpenCLIP vision encoder, and the LLM is finetuned with LoRA. However, the LLaVA baseline you compared with is implement…
-
Paper : [https://arxiv.org/pdf/2406.16860](https://arxiv.org/pdf/2406.16860)
Website : [https://cambrian-mllm.github.io](https://cambrian-mllm.github.io)
Code : [https://github.com/cambrian-mllm/cam…
-
Hello
This looks like a promising app, and I very much like your effort on it.
I am trying to develop a full app, including a frontend, for a project of mine using vLogger.
That is quite challenging.
H…
-
Hi folks,
Thanks for open-sourcing this work. An external community contributor has ported all MetaCLIP checkpoints to the hub: https://huggingface.co/models?other=metaclip, both in the 🤗 Transform…
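For anyone who wants to try them, a minimal usage sketch (the checkpoint id below is one example from that hub listing, not the only option):
```
from transformers import CLIPModel, CLIPProcessor

# Example id from the hub listing linked above; any ported
# MetaCLIP checkpoint should work the same way.
checkpoint = "facebook/metaclip-b32-400m"
model = CLIPModel.from_pretrained(checkpoint)
processor = CLIPProcessor.from_pretrained(checkpoint)
```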
-
Currently, we have some if-statements that address model-specific exceptions. Since these are exceptions rather than something general, we want to specify them in the custom model file or move them to…
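One possible shape for this (a purely hypothetical sketch; `MODEL_QUIRKS`, `register_quirk`, and the `"clip"` entry are illustrative names, not existing code in the repo): a small registry that lets each model register its own exception handler, so the shared extraction code stays free of model-specific if-statements.
```
from typing import Callable, Dict

# Hypothetical registry mapping a model name to its special-case hook.
MODEL_QUIRKS: Dict[str, Callable] = {}

def register_quirk(model_name: str):
    """Decorator that registers a model-specific exception handler."""
    def decorator(fn: Callable) -> Callable:
        MODEL_QUIRKS[model_name] = fn
        return fn
    return decorator

@register_quirk("clip")
def clip_quirk(features):
    # Model-specific post-processing that previously lived in an
    # if-statement inside the shared code path.
    return features

def postprocess(model_name: str, features):
    # Shared code stays generic: apply a quirk only if one is registered.
    quirk = MODEL_QUIRKS.get(model_name)
    return quirk(features) if quirk else features
```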