-
# Prerequisites
When I install via `pip install llama-cpp-python`, an error occurs. It happens on versions 0.2.81 and 0.2.80; version 0.2.79 installs successfully.
python 3.11…
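As a stopgap until the newer releases build again, one option (per the report above, assuming 0.2.79 still fits your use case) is to pin the last version that installed cleanly:

```shell
# 0.2.81 and 0.2.80 fail to build in this environment; 0.2.79 is the
# last release reported to install successfully, so pin it explicitly.
pip install "llama-cpp-python==0.2.79"
```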
-
Hello!
I am evaluating the llava-next-llama-3-8b model with lmms-eval and hit this bug:
```
File "lmms-eval/lmms_eval/models/llava.py", line 358, in generate_until
conv = copy.deepcopy(con…
-
I tried the demo code and got an error:
```
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from ll…
-
# Current Behavior
I run the following:
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --verbose
and an error occurred:
ERROR: Failed building wheel for llama-cpp-python
# Environment …
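Since the wheel build shells out to CMake and the CUDA toolchain, a missing or unreachable `nvcc` is a frequent cause of this failure. A quick sanity check before retrying (assuming a CUDA build is actually intended; the flags shown are standard pip/CMake options, not project-specific ones):

```shell
# All three must succeed for a -DGGML_CUDA=on build to have a chance;
# the wheel build runs them in a subprocess, so they must be on PATH.
nvcc --version
cmake --version
gcc --version

# Retry with a clean build so a stale cached wheel is not reused.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir --verbose
```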
-
# Prerequisites
pip install llama-cpp-python --verbose
# Environment and Context
```
$ python3 --version
Python 3.12.3
$ make --version
GNU Make 3.82
$ g++ --version
gcc (GCC) 11.2.0
```…
-
Thanks a lot for your excellent work. I wonder how you evaluate the trained model. Do you use ./scripts/more/eval/pope.sh, which uses llava.eval.model_vqa_loader for evaluation (seems no modification f…
-
llamafile is a local app (similar to llama.cpp) for running LLMs in a distributed way from a single file.
The library could be used on both `.gguf` and `.llamafile` files.
repo: https://github.com/Mozilla…
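For context, a `.llamafile` bundles the weights and the runtime into one executable, so the single-file usage looks like this (the filename below is illustrative, not a specific release):

```shell
# A llamafile is run directly after being marked executable;
# no separate runtime or model download step is needed.
chmod +x model.llamafile   # "model.llamafile" is a placeholder name
./model.llamafile --help
```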
-
**Describe the bug**
```
model_type="llava-llama-3-8b-v1_1"
CUDA_VISIBLE_DEVICES=0 swift infer \
--model_type $model_type \
--infer_backend lmdeploy
```
Error:
```Traceback (most re…
-
I just followed the steps, but when I run the following code:
# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("Efficient-Large-Model/Llama-3-VILA1.5-8B")
…
-
This is the workflow I use.
[00 - Simple llava.json](https://github.com/user-attachments/files/16831533/00.-.Simple.llava.json)
------------------------------------------
Here is the error:
# ComfyUI…