-
### Question
Hi authors, I recently tried this with Llama-2; thanks for sharing the code that adds Llama-2 support to LLaVA.
1. I downloaded the llava-llama-2-13b-chat and llama-2-13b-chat.
2. Then I c…
-
@zhangry868 @StevenyzZhang
### Describe the issue
Can you please share a json file of results per sample and the overall accuracy for DocVQA and STVQA, along with parameters used for inference fo…
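For reference, here is a minimal sketch of how overall accuracy could be computed from such a per-sample results file. The JSON schema (a list of records with a boolean `"correct"` field) is a hypothetical assumption for illustration, not the authors' actual output format:

```python
import json

def overall_accuracy(results):
    """Compute overall accuracy from per-sample records.

    `results` is assumed to be a list of dicts, each carrying a boolean
    "correct" field -- a hypothetical schema, not the authors' format.
    """
    if not results:
        return 0.0
    return sum(1 for r in results if r["correct"]) / len(results)

# Example usage: load a per-sample results file and print the accuracy.
# with open("docvqa_results.json") as f:
#     print(overall_accuracy(json.load(f)))
```

Sharing per-sample records rather than only the aggregate number makes it possible to diff disagreements sample by sample when reproducing the DocVQA/STVQA scores.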
-
### Question
Thanks for the great work!
Also, it looks like the A800 cannot enable flash-attn (error screenshot below).
```
python \
llava/train/train.py \
    --model_name_or_path /root/dev…
```
-
![image](https://github.com/h2oai/h2ogpt/assets/45778128/da67f883-60d1-4af0-9998-928a20d9ca59)
![image](https://github.com/h2oai/h2ogpt/assets/45778128/89fc312f-a1ca-448e-a74c-d09c14a070fd)
> (C:\…
-
### Describe the bug
Last week, after updating, it stopped working, so I opted for a reinstall.
At the end of the installation there are warnings and errors:
```
Installing collected packages: sentencepi…
```
-
### System Info
Linux, CUDA 11.8
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
-
### When did you clone our code?
I cloned the code base after 5/1/23
### Describe the issue
When running
> `llava.serve.model_worker` with `liuhaotian/llava-v1-0719-336px-lora-merge-vicuna-13b-…
-
### System Info
https://github.com/open-mmlab/Multimodal-GPT
Are there any good ways to quantize open-flamingo?
I found that, after using prepare_model_for_kbit_training, the flamingo_init() is reve…
-
Hi,
I'm using an AGX Xavier, L4T 35.4.1, JetPack 5.1.2, and I started riva_quickstart_arm64_v2.12.0 (IP: 192.168.0.40).
I also have one AGX Orin, JetPack 6, running the Docker image dustynv/local_llm:r36.2.0 (IP: 192.168.…
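When two Jetson boxes need to talk to each other like this, a quick TCP reachability check from the Orin can rule out basic networking problems first. A minimal sketch (50051 is Riva's default gRPC port; adjust host/port if your deployment differs):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the Orin, check whether the Riva server on the Xavier is reachable.
# 50051 is Riva's default gRPC port; change it if you reconfigured the server.
# print(port_open("192.168.0.40", 50051))
```

If this returns False, the problem is at the network or firewall level rather than in the Riva client itself.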
-
### Question
I cloned the latest code and have tried various versions of the LLaMA-13B weights (including the original one). However, I cannot reproduce the results on ScienceQA following the auth…