-
Thanks for the curated list of multimodal LLMs. We have a related work that we hope can be added to this awesome repository.
Accountable Textual-Visual Chat Learns to Reject Human Instructions in Imag…
-
Such an exciting list of multimodal LLM projects. We have a related work that we hope can be added to this awesome repository.
Paper Title: LMEye: An Interactive Perception Network for Large Language Model…
-
### System Info
https://github.com/open-mmlab/Multimodal-GPT
Are there any good ways to quantize open-flamingo?
I found that after using `prepare_model_for_kbit_training`, the `flamingo_init()` is reve…
-
Great work with this loader. I'm seeing 5x i/s improvements in Ooba and was hopeful that it would help serve up some gains when using Ooba's multimodal extension (confirmed working in my current setup…
-
Hi, I tried to run "biomedgpt_inference.ipynb" but got some errors.
`print("Assistant: ", chat.answer()[0])`
However, this raises:
> Traceback (most recent call last)
> /tmp/ipykernel_118030/340…
-
### Documentation Issue Description
In my company I work on a team that has implemented a lot of RAG solutions. These were all built ad hoc from research notebooks. Trying to connect the different pa…
-
Hi, thanks for compiling this list! I hope to bring the following works from my team to your attention:
- VIMA: General Robot Manipulation with Multimodal Prompts. ICML 2023. https://vimalabs.githu…