-
Hi,
When trying to run CLI inference, I ran into the following error. Do you have any idea how to fix it? Thanks.
```
Traceback (most recent call last):
  File "/home/yilche/miniconda3/envs/ll…
```
-
I used model_vqa.py but ran into an error with LLaVA-RLHF.
-
It seems that neither phi-2.Q5_K_M.llamafile nor wizardcoder-python-13b-main.llamafile supports most of the options that copilot.el invokes them with.
-
### Describe the issue
Issue:
Command:
```
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path='liuhaotian/llava-llama-2-7b-chat-lightning-lora-preview',
…
```
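For what it's worth, LoRA preview checkpoints need the matching base model passed as `model_base`, or loading fails. A minimal sketch of the call (the base-model id is my assumption; double-check which checkpoint the LoRA was trained from):
```python
from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model

model_path = "liuhaotian/llava-llama-2-7b-chat-lightning-lora-preview"
# Assumption: the LoRA was trained on top of this base checkpoint.
model_base = "meta-llama/Llama-2-7b-chat-hf"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=model_base,  # required for *-lora-* checkpoints
    model_name=get_model_name_from_path(model_path),
)
```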
-
The server crashes with "illegal instruction" on my machine.
I tried running it as `./mistral-7b-instruct-v0.1-Q4_K_M-server.llamafile --ftrace`; these were the last few lines:
```
FUN 1186 11…
```
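"Illegal instruction" usually means the binary executes a CPU instruction (e.g. AVX/AVX2) that the host CPU lacks. A rough Linux-only diagnostic sketch to list which common SIMD features the CPU reports (the feature list is my guess at the usual suspects, not llamafile's documented requirements):
```python
import platform

def cpu_flags() -> set[str]:
    # Parse the "flags" line from /proc/cpuinfo into a set of feature names.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if platform.system() == "Linux":
    flags = cpu_flags()
    for feature in ("sse3", "ssse3", "avx", "avx2", "f16c", "fma"):
        print(f"{feature}: {'yes' if feature in flags else 'MISSING'}")
```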
-
Giving the new 4-bit quantized option a try, I noticed that 64 GB (52 free) of CPU RAM is not enough to load this model. It works fine with the Hugging Face pipeline due to `low_cpu_mem_usage=True`. T…
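For comparison, a minimal sketch of the Hugging Face path that stays within RAM (the model id is a placeholder; `low_cpu_mem_usage=True` streams shards instead of materializing the full state dict in CPU RAM first):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

MODEL_ID = "org/model-name"  # placeholder: the checkpoint being loaded

# 4-bit quantization via bitsandbytes; weights are quantized as they load.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
    low_cpu_mem_usage=True,  # avoids building the whole fp32 state dict in RAM
)
```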
-
Hello! I'm currently trying to fine-tune on the latest code with part of the WebVid data, but the latest fine-tuning documentation doesn't match the files provided in the repository. I wrote my own visionbranch_stage2_finetune.yaml and sampled 2k videos to test on a single A100, but the loss is always NaN. I'm not sure whether the yaml is misconfigured; could you please take a look?
visionbranch_stage2_finetune.yaml configuration:
```
…
```
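Not a fix, but a hedged debugging sketch: failing fast on the first non-finite loss makes it easier to tell a bad batch from a bad config (with fp16 fine-tuning, a too-high learning rate is a common cause):
```python
import torch

def check_finite_loss(loss: torch.Tensor, step: int) -> None:
    """Raise on the first NaN/inf loss so the offending batch can be inspected."""
    if not torch.isfinite(loss).all():
        raise RuntimeError(f"non-finite loss at step {step}: {loss}")

# usage inside the training loop (hypothetical):
#   check_finite_loss(loss, step)
```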
-
Thanks for your repo!
I noticed that you have integrated LLaVA into this project, so I cloned `llava v1.0.2` into this project's root dir and installed it with `cd LLaVA ; pip install -e .`.
I am su…
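A quick way to confirm the editable install is the one actually being imported:
```python
import llava

# Should print a path inside the cloned LLaVA/ directory rather than
# site-packages, confirming the `pip install -e .` copy is on the path.
print(llava.__file__)
```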
-
I used the `convert-bloom-hf-to-gguf.py` script to convert the Hugging Face model `bigscience/bloom-7b1` to a GGUF model with `f16` successfully:
```
python convert-bloom-hf-to-gguf.py models/bloom-7b1/ 1
…
```
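As a quick sanity check on the converted file, something like this should load it (the output filename is my guess at what the converter writes; `pip install llama-cpp-python` first):
```python
from llama_cpp import Llama

# Assumed output path of the conversion step; adjust to the real filename.
llm = Llama(model_path="models/bloom-7b1/ggml-model-f16.gguf")
out = llm("The capital of France is", max_tokens=8)
print(out["choices"][0]["text"])
```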
-
## Issue:
Trying to download https://huggingface.co/liuhaotian/llava-llama-2-13b-chat-lightning-preview using:
```
# Load model directly
from transformers import AutoModelForCausalLM
model = Aut…
```
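If the goal is just to fetch the weights, `huggingface_hub` can download the whole repo without instantiating the model; as far as I know, plain `AutoModelForCausalLM` can't construct LLaVA checkpoints anyway, since they declare a custom `LlavaLlamaForCausalLM` architecture:
```python
from huggingface_hub import snapshot_download

# Downloads every file in the repo to the local HF cache and returns the path.
local_dir = snapshot_download(
    repo_id="liuhaotian/llava-llama-2-13b-chat-lightning-preview"
)
print(local_dir)
```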