-
Hello,
First of all, thank you for your work on llamafile; it seems like a great idea for simplifying model usage.
It seems from the readme that at this stage llamafile does not support AMD GPUs.
The…
-
### What is the issue?
Error: llama runner process has terminated: signal: aborted (core dumped)
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
1.40
-
### OS
Microsoft Windows 10 Enterprise
Version 10.0.19045 Build 19045
From a cmd shell:
```
wget https://huggingface.co/jartine/llava-v1.5-7B-GGUF/resolve/main/llava-v1.5-7b-q4-server.llamafile…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
### Describe the issue
Issue:
Hey there - huge fan of your work.
I'm trying to integrate LLaVa into a training framework, and I've been running into the same issue again and again. When I try to tr…
-
I have Xcode installed, but I'm getting this on my M1 (8GB):
```
llm_load_tensors: VRAM used: 0.00 MB
...............................................................................................…
-
Hi,
lscpu gives:
```
> lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 36 bits physical, 48 bits virtual
Byte Order: Li…
-
How do I connect to it using the API? I've installed it and it works great, but I want to connect to it programmatically via the API.
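A minimal sketch of one way to do this, assuming the server llamafile is running locally on its default port (8080) and exposing the OpenAI-compatible `/v1/chat/completions` endpoint; the host, port, and model name below are assumptions you may need to adjust for your setup:
```python
# Query a locally running llamafile server through its
# OpenAI-compatible chat completions endpoint (stdlib only).
import json
import urllib.request

HOST = "http://localhost:8080"  # assumed default server address

payload = {
    "model": "LLaMA_CPP",  # placeholder; the local server does not validate this
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
}

req = urllib.request.Request(
    f"{HOST}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer no-key",  # local server ignores the key
    },
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```
The same request can also be sent with `curl` or with an OpenAI-style client pointed at the local base URL.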
-
Are there plans to support this [new SOTA open source vision model](https://sharegpt4v.github.io/)?
Despite its compact size, the model is able to extract text from images with incredible accurac…
-
### Describe the issue
Issue: As shown in this [issue](https://github.com/haotian-liu/LLaVA/issues/62), the training loss at convergence should be lower than 2 for `llava-vicuna-chat-hf-pretrain`. Ho…