-
# Prerequisites
When I install via `pip install llama-cpp-python`, an error occurs. It happens on versions 0.2.81 and 0.2.80; version 0.2.79 installs successfully.
python 3.11…
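A minimal sketch of the workaround implied above (the version numbers come from this report; `pin_command` and the known-bad set are names I made up for illustration): pick the newest release not reported to fail and emit the matching pip pin.

```python
# Versions this report says fail to build wheels (assumption from the report).
KNOWN_BAD = {"0.2.80", "0.2.81"}

def pin_command(candidates, bad=KNOWN_BAD):
    """Return a pip command pinning the newest candidate not known to fail."""
    ok = [v for v in candidates if v not in bad]
    # Sort numerically on the dotted components so "0.2.10" > "0.2.9".
    ok.sort(key=lambda v: tuple(int(p) for p in v.split(".")))
    return f'pip install "llama-cpp-python=={ok[-1]}"' if ok else None

print(pin_command(["0.2.79", "0.2.80", "0.2.81"]))
# → pip install "llama-cpp-python==0.2.79"
```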
-
I tried the demo code and got an error:
```
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from ll…
-
Thanks a lot for your excellent work. I wonder how you evaluate the trained model: do you use `./scripts/more/eval/pope.sh`, which uses `llava.eval.model_vqa_loader` for evaluation (seems no modification f…
-
Hello!
I am evaluating the llava-next-llama-3-8b model using lmms-eval and hit this bug:
```
File "lmms-eval/lmms_eval/models/llava.py", line 358, in generate_until
conv = copy.deepcopy(con…
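For context on this class of failure: `copy.deepcopy` falls back to the pickle protocol for objects it cannot copy directly, so a conversation template that carries an unpicklable member raises exactly this kind of `TypeError`. A minimal reproduction with a hypothetical template class (the class and the rebuild workaround are illustrative, not lmms-eval's actual fix):

```python
import copy

class ConvTemplate:
    # Hypothetical stand-in for a conversation template that holds an
    # unpicklable member next to its plain data (here a generator; in the
    # real model it might be a tokenizer or other native handle).
    def __init__(self):
        self.messages = []
        self.stream = (m for m in self.messages)  # generators cannot be pickled

conv = ConvTemplate()
conv.messages.append(("user", "hi"))

try:
    conv2 = copy.deepcopy(conv)   # deepcopy falls back to pickle -> TypeError
except TypeError:
    conv2 = ConvTemplate()        # workaround: rebuild, then copy plain data
    conv2.messages = list(conv.messages)

print(conv2.messages)  # → [('user', 'hi')]
```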
-
# Current Behavior
I run the following:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --verbose
and an error occurred:
ERROR: Failed building wheel for llama-cpp-python
# Environment …
-
**Describe the bug**
```
model_type="llava-llama-3-8b-v1_1"
CUDA_VISIBLE_DEVICES=0 swift infer \
--model_type $model_type \
--infer_backend lmdeploy
```
Error:
```Traceback (most re…
-
# Prerequisites
pip install llama-cpp-python --verbose
# Environment and Context
```
$ python3 --version
Python 3.12.3
$ make --version
GNU Make 3.82
$ g++ --version
gcc (GCC) 11.2.0
```…
-
I re-downloaded this repo and tried `transformers` versions `4.40.0.dev`, `4.40.0`, and `4.41.2`; the result is still `['']`.
Some things I did include:
All the weights I use are local weights. Below are my changes.
1. `…
-
**Scenario:**
CLI Inference
**Command:**
CUDA_VISIBLE_DEVICES=0 python3 -m videollava.serve.cli --model-path "/root/Video-LLaVA-7B" --file "/root/videos/8132-207209040_small.mp4" --load-4bit
**i…