-
Hi, thanks for this new technique for extending MLLMs to interleaved documents.
I have a doubt regarding the visual encoder in Figure 2 and Section 3.
In Figure 2, as I understand it, "during t…
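For context, a common MLLM training setup (not necessarily this paper's; the module names below are placeholders) freezes the visual encoder and trains only the projector that maps vision features into the LLM's embedding space. A minimal PyTorch sketch:
```python
import torch
import torch.nn as nn

# Placeholder modules: a ViT backbone would stand in for visual_encoder, and
# the projector maps vision features into the LLM embedding space.
visual_encoder = nn.Linear(1024, 1024)
projector = nn.Linear(1024, 4096)

# Freeze the visual encoder so only the projector receives gradient updates.
for p in visual_encoder.parameters():
    p.requires_grad = False
visual_encoder.eval()  # also stop dropout/batch-norm statistics from updating

optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-4)

features = visual_encoder(torch.randn(1, 1024))  # frozen forward pass
tokens = projector(features)                     # trainable projection
```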
-
See example output below. The example does not work (no "human input" is ever sought) and there is no explanation of how the feature is supposed to be used, which makes it useless.
```
[DEBUG]: == Wor…
```
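If this is crewAI's `human_input` flag (the `[DEBUG]` log format suggests so), here is a minimal sketch of how the feature appears intended to be used; the agent and task fields are assumptions, not taken from the original report:
```python
from crewai import Agent, Task, Crew

writer = Agent(
    role="Writer",
    goal="Draft a short summary",
    backstory="A concise technical writer.",
)
task = Task(
    description="Summarize the project README in three sentences.",
    expected_output="A three-sentence summary.",
    agent=writer,
    human_input=True,  # the feature in question: should pause and ask for review
)
crew = Crew(agents=[writer], tasks=[task], verbose=True)

# With human_input=True, kickoff() is expected to block and prompt on stdin
# before accepting the task's output; the report above says this never happens.
result = crew.kickoff()
print(result)
```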
-
Do you have plans to support Chinese evaluation of MLLMs on the MME benchmark?
-
Hi, thanks for the good work. I was confused after reading the paper about the LCL training process, i.e. Sections 3.1 and 3.2. What are the inputs and outputs given to the MLLM during LCL training? What …
-
### Describe the issue
Issue:
When I tried to use cli.py locally to run inference on llava-v1.5-13b, the script reported that the checkpoint could not be loaded, even though both files exist.
Comma…
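For comparison, the loading path the LLaVA repo documents looks roughly like the sketch below (the model path is a placeholder; a local directory must contain both the config and all weight shards, or loading fails with a missing-checkpoint error):
```python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1.5-13b"  # or a local checkout of the weights

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,  # only needed for LoRA / projector-only checkpoints
    model_name=get_model_name_from_path(model_path),
)
```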
-
### Describe the feature
For now, only MiniGPT-4 is supported with the MME dataset. I wonder if it is possible to also support MLLMs such as InstructBLIP and LLaVA.
I also have a problem about the MME…
-
### Describe the bug
Do the prompts simply follow the minigpt4 and instructblip examples under multimodal/models, or were dedicated prompts designed? When I reproduce mplug with the simple example prompt, I fall far short of the 49% on the validation set reported in the paper.
```python
img_prompt = '###Human: '
if '…
```
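For reference, here is a minimal sketch of the MiniGPT-4-style template this snippet appears to follow; the `<Img><ImageHere></Img>` wrapper is an assumption based on MiniGPT-4's conventions, not taken from this repo:
```python
# Hypothetical reconstruction: image embeddings replace the <ImageHere>
# placeholder at inference time, and the question sits between the turns.
def build_prompt(question: str) -> str:
    img_prompt = '###Human: <Img><ImageHere></Img> '
    return img_prompt + question + ' ###Assistant: '

# MME questions are yes/no, e.g.:
print(build_prompt('Is there a dog in the image? Please answer yes or no.'))
```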
-
Just did a very simple run with llama-7b-4bit. It... took a while, so I had it run in a screen session. But it worked!
```
root@FriendlyWrt /s/o/llama.cpp (master)# time ./main --color -m models/ggml-model-q4…
```
-
# Liangyu's work
- [ ] add support for Vicuna pretrained LLM, https://github.com/lm-sys/FastChat#vicuna-weights. @liangyuch
- [ ] support interactive chat (may init from alpaca or vicuna to better…
-
The performance of GIT2 on the leaderboard is quite impressive: it has only 5.1B parameters. The original paper was published in 2022, and its repository has not been updated since March 2023. The or…