-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-compass/opencompass/issues/) and [Discussions](https://github.com/open-compass/opencompass/discussions) but cannot get the ex…
-
### Describe the issue
Issue: I am trying to do visual instruction tuning using the pretrained projector liuhaotian/llava-pretrain-llama-2-7b-chat. However, I got the following issue. I have downloaded the pr…
llv22 updated 4 months ago
-
Hello,
When I run the Quick Start demo, I encounter the following problem:
File "/home/wxz/LLM/mPLUG_Owl2/mplug_owl2/model/modeling_llama2.py", line 139, in forward
key_states = repeat_kv(…
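For context on the failing call: `repeat_kv` comes from grouped-query attention, where a small number of key/value heads is shared across a larger number of query heads, so the KV tensors are repeated along the head axis before attention. A minimal NumPy sketch of what such a function typically does (the actual mPLUG-Owl2 code in `modeling_llama2.py` is PyTorch; this is only an illustration of the shape transform):

```python
import numpy as np

def repeat_kv(hidden_states: np.ndarray, n_rep: int) -> np.ndarray:
    """Repeat each key/value head n_rep times along the head axis.

    hidden_states: (batch, num_kv_heads, seq_len, head_dim)
    returns:       (batch, num_kv_heads * n_rep, seq_len, head_dim)
    """
    if n_rep == 1:
        return hidden_states
    batch, num_kv_heads, seq_len, head_dim = hidden_states.shape
    # Insert a new axis and broadcast, then fold it into the head axis.
    expanded = np.broadcast_to(
        hidden_states[:, :, None, :, :],
        (batch, num_kv_heads, n_rep, seq_len, head_dim),
    )
    return expanded.reshape(batch, num_kv_heads * n_rep, seq_len, head_dim)

# Example: 2 KV heads repeated 4x -> 8 heads, matching 8 query heads.
kv = np.zeros((1, 2, 5, 64))
print(repeat_kv(kv, 4).shape)  # (1, 8, 5, 64)
```

A shape mismatch at this line usually means the query/KV head counts in the config don't agree with the loaded weights.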
-
This seems like a great contribution to the MLLM space! I see that your model was evaluated on SEEDBench v1. Would you be able to share the exact scripts and prompts used for evaluation to replicate t…
-
How do I load LLaMA checkpoints? Can it load checkpoints downloaded from Hugging Face?
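Checkpoints in the Hugging Face format (a directory containing `config.json` plus weight shards) can generally be passed straight to `transformers.AutoModelForCausalLM.from_pretrained`. A hedged helper to sanity-check a downloaded directory first — the function `looks_like_hf_checkpoint` is hypothetical, not part of any library:

```python
from pathlib import Path

def looks_like_hf_checkpoint(path: str) -> bool:
    """Heuristic: a Hugging Face-format checkpoint directory has a config.json
    and at least one weight file (*.safetensors or pytorch_model*.bin)."""
    p = Path(path)
    if not (p / "config.json").is_file():
        return False
    return bool(list(p.glob("*.safetensors")) or list(p.glob("pytorch_model*.bin")))

# If the directory looks valid, load it (requires `transformers` to be installed):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("/path/to/checkpoint")
```

Original (non-HF) Meta checkpoints usually need a conversion step first, so checking the directory layout up front avoids a confusing load error.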
-
### Is there an existing feature or issue for this?
- [X] I have searched the existing issues
### Expected feature
So, LM Studio is a self-hosted API server option for LLMs, and it's actually buil…
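For context on the request: LM Studio exposes an OpenAI-compatible HTTP server locally (by default on `http://localhost:1234/v1`), so an integration can reuse existing OpenAI-style clients. A hedged sketch that only builds the request payload — the model name is a placeholder, and the endpoint URL is assumed from LM Studio's defaults:

```python
def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

payload = build_chat_request("local-model", "Hello!")
# To actually send it (requires the `requests` package and a running server):
# requests.post("http://localhost:1234/v1/chat/completions", json=payload)
```

Because the wire format matches OpenAI's chat completions API, supporting LM Studio is often just a matter of making the base URL configurable.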
-
Hi,
I'm trying to use your code to pretrain llama2-7b, but I find that Megatron-LM has been updated recently and some code like 'indexed_dataset' has been removed/changed in the latest code.
D…
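Recent Megatron-LM refactors did move or remove legacy modules such as `indexed_dataset`, so a quick, hedged way to see what an installed version still exposes before launching pretraining is to probe for the module (the dotted path below is the legacy location used by older scripts):

```python
import importlib.util

def module_available(dotted_name: str) -> bool:
    """Return True if the given module can be found without importing it."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # A parent package (e.g. `megatron`) is not installed at all.
        return False

# Legacy location referenced by older pretraining code:
print(module_available("megatron.data.indexed_dataset"))
```

If the probe fails, pinning Megatron-LM to the commit the pretraining code was written against is usually simpler than porting to the new data pipeline.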
-
### Bug Report
GPT4All crashes without any warning when using a model with RAM requirements greater than 16 GB. But when I switch to version 2.5.1 or load a model with RAM requirements under 8 GB…
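As a rough sanity check before loading, a quantized model's resident memory can be approximated as parameter count × bits-per-weight / 8, plus overhead for context and buffers. A hedged back-of-the-envelope helper — the 20% overhead factor is an assumption, not a GPT4All figure:

```python
def approx_model_ram_gb(n_params_billion: float, bits_per_weight: int,
                        overhead: float = 1.2) -> float:
    """Rough RAM estimate: params * bits/8, scaled by an assumed overhead factor."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# Example: a 13B model at 4-bit quantization.
print(round(approx_model_ram_gb(13, 4), 1))  # 7.8 (GB)
```

If the estimate exceeds physical RAM, a hard crash without a warning is consistent with the OS killing the process rather than the app failing gracefully.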
-
Hi,
I'm currently playing around with the German language and documents and have used the multilingual embedding models quite successfully. However, when running Llama 2-Chat-7B I always get answers in …
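One common workaround is to pin the reply language in the system prompt using Llama 2-Chat's instruction template. A hedged sketch — the German system text is just an example, and smaller chat models may still drift back to English on long inputs:

```python
def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 2-Chat template."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_chat_prompt(
    "Du bist ein hilfreicher Assistent. Antworte immer auf Deutsch.",
    "Fasse das folgende Dokument zusammen: ...",
)
print(prompt)
```

Putting the language instruction in the `<<SYS>>` block rather than the user turn tends to make it stick across a conversation.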
-
The speed difference is astounding compared to https://huggingface.co/chat/ when running llama2-70b-chat.
I wonder what I am doing wrong. I have A100 GPUs, but the maximum on a single node is 4, …
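Before blaming the hardware, a quick capacity check helps: fp16/bf16 weights alone take 2 bytes per parameter, and KV cache plus activations come on top of that. A hedged sketch of the arithmetic (whether 4 A100s suffice depends on the 40 GB vs. 80 GB variant):

```python
def fp16_weight_gb(n_params_billion: float) -> float:
    """Memory for fp16/bf16 weights alone: 2 bytes per parameter.
    Excludes KV cache, activations, and framework overhead."""
    return n_params_billion * 2  # 1e9 params * 2 bytes = 2 GB per billion

weights_gb = fp16_weight_gb(70)
print(weights_gb)  # 140.0
# vs. 4 x 40 GB = 160 GB total, or 4 x 80 GB = 320 GB total:
# the weights fit either way, but on 40 GB cards little room is left
# for KV cache and activations, which can force slow configurations.
```

The remaining speed gap is usually the serving stack: an optimized inference server with tensor parallelism and continuous batching can be many times faster than a naive pipeline for the same model and GPUs.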