-
## Links
[paper](https://arxiv.org/abs/2405.02246)
[models](https://huggingface.co/HuggingFaceM4/idefics2-8b)…
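For quick reference, a minimal sketch of loading the linked idefics2-8b checkpoint with transformers; this assumes a transformers release that includes Idefics2 support (added around v4.40), and the model name is taken from the URL above:

```python
# Minimal sketch: load the idefics2-8b checkpoint linked above.
# Assumes a transformers version that includes Idefics2 (~4.40+).
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
)
```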
-
### Question
What work is involved in adding LLaVA to the Hugging Face transformers package?
I already see InstructBlip in there -- https://huggingface.co/models?other=instructblip and here is the…
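As a point of comparison, the existing InstructBLIP integration gives a sense of the surface area a LLaVA port would need (a processor plus a conditional-generation model class). A minimal sketch of the current InstructBLIP API; the checkpoint name is one of the published Salesforce checkpoints and is illustrative:

```python
# Minimal sketch of the existing InstructBLIP integration in transformers,
# as a reference for the shape a LLaVA port would likely take.
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-flan-t5-xl"
)
```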
-
Hello, I get this error when running the code: FViT: EvaCLIPViT: Model config for EVA02-CLIP-B-16 not found;
Another issue: when installing xformers it says torch 2.2.0 is required, but that seems to conflict with the installed mmcv and torch;
How can I resolve these? Thanks!
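To help diagnose the second issue, a small hedged sketch that prints the installed torch / mmcv / xformers versions; the exact compatible xformers pin depends on the torch and CUDA build reported here:

```python
# Hedged diagnostic: print the installed versions of the conflicting
# packages so a compatible xformers pin can be chosen for them.
for name in ("torch", "mmcv", "xformers"):
    try:
        mod = __import__(name)
        print(f"{name}: {mod.__version__}")
    except ImportError:
        print(f"{name}: not installed")
```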
-
I don't know why I can't import the PuLID plugin. I've already installed everything required in requirements.txt, but it still can't be imported.
![企业微信截图_20240513102323](https://github.com/cubiq/PuLID_ComfyUI/a…
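A hedged debugging sketch that tries to import each package listed in requirements.txt, to pinpoint which dependency is actually missing. Note that some PyPI names differ from their import names (e.g. opencv-python imports as cv2), so treat failures as hints rather than proof:

```python
# Hedged sketch: import each entry in requirements.txt to find the
# dependency whose absence is breaking the plugin import.
import importlib

with open("requirements.txt") as f:
    for line in f:
        # Strip version specifiers like "pkg==1.2" or "pkg>=1.2".
        pkg = line.strip().split("==")[0].split(">=")[0]
        if not pkg or pkg.startswith("#"):
            continue
        try:
            importlib.import_module(pkg.replace("-", "_"))
            print(f"OK   {pkg}")
        except ImportError as e:
            print(f"FAIL {pkg}: {e}")
```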
-
We are working with disabled kids who have very limited capabilities. Most of them can't control their hands but can control their head (also with limitations).
So we wrote a special application with very large button…
-
![MoCov1](https://github.com/youngtboy/Awesome-Self-Supervised-Vision-Pretrain/assets/66102178/0db281de-9c7c-4292-8e52-2fe125fb4afa)
![MoCov2](https://github.com/youngtboy/Awesome-Self-Supervised-Vis…
-
Dear developers,
I would like to request the addition of support for the EVA-CLIP model in this project. EVA-CLIP is a powerful image-text dual-encoder model that has shown strong performance on a …
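For reference, EVA-CLIP checkpoints are already loadable through the open_clip library, which could serve as an integration path. A minimal sketch; the model and pretrained tags are taken from open_clip's registry and may differ by version:

```python
# Minimal sketch: load an EVA-CLIP model via open_clip.
# Assumes `pip install open_clip_torch`; tag names may vary by version.
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "EVA02-B-16", pretrained="merged2b_s8b_b131k"
)
tokenizer = open_clip.get_tokenizer("EVA02-B-16")
```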
-
Hello! I was finetuning from the pretrained_flant5xl and pretrained_opt2.7b models, and much to my surprise the flant5xl model excels at creating correct labels, as my captions are actually a string…
-
You guys did a great job! I would like to use your dataset to test other models. How can I get the MovieChat-1K dataset?
-
Can you share the training log of t2m_trans?
I have found t2m_trans difficult to train.