-
Please provide instructions on how to evaluate the AlphaCLIP MLLM model.
-
Hi
First of all, thanks for your amazing work here.
I’m wondering whether CrewAI supports any MLLM (multimodal large language model), since that would be more suitable for my use case. For example, the a…
-
Great work! I'm interested in unsloth. May I use it to fine-tune MLLMs like Qwen-VL?
-
Hi! Great work!
Have you tried leveraging an MLLM as the prompt encoder? We have open-source MLLMs now, and I think this would be an easy but very powerful extension. For example, we could give …
-
Refer to this PR https://github.com/intel/llm-on-ray/pull/107 and add one of the models to CI.
-
### Question
ShareGPT is used for instruction fine-tuning, with the aim of inserting data from image-independent, pure-text conversations into multi-round image conversations, so that the model…
-
Hello,
Would you consider supporting MLLMs like LLaVA?
-
Many thanks for your excellent work!
COMM (which has been on arXiv since October 2023, https://arxiv.org/pdf/2310.08825.pdf) already proposed merging the features of CLIP and DINOv2 to realize an MLLM, …
-
Hello, I ran inference.py and all of its outputs are 'yes' or 'no'. The paper reports metrics such as F1 score; how are these computed? I see the prompt is prompt += "\nIs our caption accurate?\n", so should this be understood as accuracy = (yes)/(yes+no)? Also, could you provide a .py file so that we can evaluate more MLLM models rather than only mplug.j…
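One clarification on the question above: accuracy = (yes)/(yes+no) only holds if the ground-truth answer for every sample is "yes"; F1 in general requires per-sample ground-truth labels, not just the counts of yes/no outputs. A minimal sketch (variable names and sample data are hypothetical, not from the repo's code) of computing both metrics from paired predictions and labels:

```python
# Hedged sketch: compute accuracy and F1 from binary "yes"/"no" model outputs
# paired with ground-truth labels. No external dependencies; the sample data
# below is purely illustrative.

def binary_metrics(preds, labels, positive="yes"):
    # Count the four confusion-matrix cells for the "yes" class.
    tp = sum(p == positive and l == positive for p, l in zip(preds, labels))
    fp = sum(p == positive and l != positive for p, l in zip(preds, labels))
    fn = sum(p != positive and l == positive for p, l in zip(preds, labels))
    tn = sum(p != positive and l != positive for p, l in zip(preds, labels))
    accuracy = (tp + tn) / len(preds)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

preds = ["yes", "no", "yes", "yes"]
labels = ["yes", "no", "no", "yes"]
acc, f1 = binary_metrics(preds, labels)  # acc = 0.75, f1 = 0.8
```

Note that if all ground-truth labels were "yes", fp and tn would both be 0 and accuracy would indeed reduce to yes/(yes+no), matching the guess in the question.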
-
Hello,
Your "UNIAA" project is amazing and has inspired me a lot.
When will you release the other code?
Looking forward to your reply. Thank you very much!