-
Hi,
Love your work!
Could you please provide information on when the data and code will be released?
Excited to dive in!
Best,
PhucNDA.
-
Do you have any plans to support multimodal LLMs, such as MiniGPT-4/MiniGPT v2 (https://github.com/Vision-CAIR/MiniGPT-4/) and LLaVA (https://github.com/haotian-liu/LLaVA/)? That would be a significan…
-
https://huggingface.co/OpenGVLab/InternVL-Chat-V1-5
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary…
-
Thanks for your great work! Since `llava_v1_6.json` is not released in `LLaVA-NeXT`, could you please give me some guidance on how to obtain the `llava_v1_6.json` you used in the script below? http…
-
With all the growing activity and focus on multimodal models, is this library restricted to tuning text-only LLMs?
Are there plans to support tuning for vision or, more generally, multimodal models?
-
Hi @ZCMax,
This is great work! Could you let me know when you'll release the code?
-
Thank you very much for your excellent work! We have already run the model using the demo and found that the Theia model's ability in feature extraction visualization was not as good as individ…
-
Hello, I downloaded the models and ran demo.py, and got:
File "demo.py", line 12, in <module>
model, preprocess = llama.load("/mnt/home/foundation_model/LLaMA-Adapter/weights/7fa55208379faf2dd862565284101b0e4a…
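For comparison, here is a minimal sketch of how loading usually looks in the LLaMA-Adapter demo. The checkpoint name "BIAS-7B", the weights directory, and the image path below are assumptions for illustration, not values from this setup; check them against the repo's demo.py:

```python
import torch
from PIL import Image

import llama  # from the LLaMA-Adapter repo

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: llama.load takes an adapter checkpoint name plus the
# directory holding the original LLaMA weights, per the repo's demo,
# rather than a path to a single checkpoint file.
llama_dir = "/mnt/home/foundation_model/LLaMA-Adapter/weights/"  # hypothetical path
model, preprocess = llama.load("BIAS-7B", llama_dir, device)
model.eval()

# Hypothetical usage: preprocess an image and generate a caption.
img = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
prompt = llama.format_prompt("Describe the image.")
print(model.generate(img, [prompt])[0])
```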
-
Apologies, this is a long post about this Nature article. I have been reading the paper since it came out and I have MANY questions, too many to list in a single post. For some context, I have been clos…
-
Unfortunately this issue spans two repos, so I'll try to contextualize what I need fixed from this repo to this repo.
I'm following this research:
https://developer.nvidia.com/blog/enhance…