-
### System Info
Thank you for adding support for Medusa. In my comparison of Medusa models against the original base models with TGI, the base models appeared to be quicker.
I tested the below models:…
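(The model list is cut off above.) For context, a minimal sketch of how such a latency comparison could be run, assuming two locally served TGI endpoints; the URLs, prompt, and token count are placeholders, not the exact setup from this report:

```python
# Rough latency comparison between two TGI endpoints (base vs. Medusa).
# Assumes both servers are already running; URLs and prompt are placeholders.
import time
import requests

ENDPOINTS = {
    "base": "http://localhost:8080/generate",
    "medusa": "http://localhost:8081/generate",
}
payload = {
    "inputs": "Explain speculative decoding in one paragraph.",
    "parameters": {"max_new_tokens": 128},
}

for name, url in ENDPOINTS.items():
    start = time.perf_counter()
    resp = requests.post(url, json=payload, timeout=300)
    resp.raise_for_status()
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.2f}s, "
          f"{len(resp.json()['generated_text'])} chars generated")
```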
-
What could be wrong with this [notebook](https://colab.research.google.com/drive/1_U16w4P5vNulZwdYYhlduldsbP_ENaAN?usp=sharing)?
It trained successfully but it didn't follow the Zephyr syntax in [O…
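For reference, a hedged sketch of rendering the Zephyr chat format with `tokenizer.apply_chat_template`, so the notebook's prompts can be compared against it; the model id `HuggingFaceH4/zephyr-7b-beta` is an assumption, and the notebook's actual base model may differ:

```python
# Render a conversation with the Zephyr chat template for comparison.
# The model id below is an assumption, not taken from the notebook.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Medusa decoding?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should show the <|system|>/<|user|>/<|assistant|> markers
```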
-
### Discussed in https://github.com/ggerganov/llama.cpp/discussions/4350
Originally posted by **cmp-nct** December 7, 2023
I've just seen CogVLM, which is a Vicuna 7B language model behind a 9…
-
I am getting OOM with 40 GB of available GPU memory (P4 instance) after increasing the number of layers and heads to 4. This is the command I am using: torchrun --nproc_per_node=1 hydra/train/train.py --model_name_or_…
-
I hit the error in the title; the full output is below. I wonder whether the newest version of the "transformers" package (4.39.3) is appropriate?
> W&B offline. Running your script …
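If it helps to rule out a version mismatch, a small sketch to confirm which `transformers` build the script is actually importing; the minimum version used below is only illustrative, not a known requirement:

```python
# Print the transformers version the script actually imports, and optionally
# enforce a floor. The "4.38.0" floor is an example, not a known pin.
import transformers
from transformers.utils import check_min_version

print(transformers.__version__)   # e.g. 4.39.3
check_min_version("4.38.0")       # raises if the installed version is older
```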
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/eosphoros-ai/DB-GPT/issues?q=is%3Aissue) and found no similar issues.
### Operating system information
Linux
### P…
-
Observed this issue when attempting to run Llama2-7B 32x32 token inference on a Flex170 x8 DUT. For reference, this DUT is accessible following the instructions here -- [Welcome to the ISE Lab - ISE Tea…
-
When running various 7B models (Win10, Core i5, GCC64, 8 GB RAM, 4 threads) with the same program (results are largely unchanged across the recent revisions), I found the ggml-vicuna-7b-4bit-rev1.bin …
-
Hello. The demo returns the following error message when I try to load the model. Other models from the model zoo also can't be imported and throw different errors:
`model, vis_processors, _ = load_model_and_p…
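The call is cut off above; for context, a typical LAVIS `load_model_and_preprocess` invocation looks roughly like the sketch below. The model name and type are assumptions, since the ones actually used here are elided:

```python
# Typical LAVIS model-zoo loading pattern; name/model_type are placeholders.
import torch
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, txt_processors = load_model_and_preprocess(
    name="blip2_opt",               # placeholder model name
    model_type="pretrain_opt2.7b",  # placeholder model type
    is_eval=True,
    device=device,
)
```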
-
llava-v1.6-vicuna-7b-Q5_K_M.gguf
llava-v1.5-7b-Q4_K
don't work with images (those are version v1).
I have downloaded the LLaVA model you suggested!
Your model, LLaVA 1.5 7B Q5_K, is v1.1;
that's wo…
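In case it helps to reproduce, a hedged sketch of loading a LLaVA GGUF pair and passing an image via `llama-cpp-python`; the file names are placeholders, and the chat-handler usage is an assumption based on that library's multimodal API rather than the exact setup reported here:

```python
# Load a LLaVA GGUF model plus its mmproj file and ask about a local image.
# Paths are placeholders; Llava15ChatHandler comes from llama-cpp-python.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llava-v1.5-7b-Q4_K.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,
)
result = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(result["choices"][0]["message"]["content"])
```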