-
I hope the Ollama platform can add support for the InternVL-2 model series.
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Hi,
I was trying to run llama2 on my local computer (Windows 10, 64 GB RAM, GPU 0 Intel(R) Iris(R) Xe Graphics) and got the following error:
1. raise RuntimeError("Distributed package doesn't have N…
-
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md)…
-
I had a long roleplay chat using 0.8.1 with a Q4_K_M GGUF quant of L3-8B-Lunar-Stheno. It worked well until message #42, where it took 19 minutes before it replied, as if it had to reprocess the…
-
### Bug Description
I'm very new to Langflow, and I was interested in leveraging vLLM with Langflow via the OpenAI-compatible API.
I tried a few llava models without success.
Langflow does not recognize the na…
-
Wouldn't it be better/easier to convert all the models to GGUF format? Then you could use llama.cpp, or even rely on Ollama, where technically you can set up whatever model you want. Just asking.
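For context, the conversion the question refers to is usually done with llama.cpp's conversion script followed by quantization. A minimal sketch, assuming a local Hugging Face model directory and the llama.cpp tools are built; the paths, output names, and quant type here are placeholders, not values from the original post:

```shell
# Convert a local Hugging Face checkpoint to GGUF (F16 precision first).
# ./my-model is a placeholder path to a downloaded HF model directory.
python convert_hf_to_gguf.py ./my-model \
  --outfile my-model-f16.gguf \
  --outtype f16

# Quantize the F16 GGUF down to Q4_K_M for lower memory use.
./llama-quantize my-model-f16.gguf my-model-Q4_K_M.gguf Q4_K_M
```

Note that not every architecture is supported by the converter, which is part of why "pass all the models to GGUF" is not always straightforward.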
-
### What happened?
When using an embedding model via Ollama's API, llama.cpp fails with an assertion error: `Bug: Assertion '__n < this->size()' failed.`
I tried nomic-embed-text-v1.5 and all-minilm.
It w…
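For reference, requesting embeddings through Ollama's HTTP API typically looks like the sketch below. This assumes a locally running Ollama server on its default port and that the model has already been pulled; it is an illustration of the call path the report describes, not a reproduction of the exact failing request:

```shell
# Hedged sketch: ask a local Ollama server for an embedding.
# Assumes `ollama pull nomic-embed-text` has been run beforehand
# and the server is listening on the default port 11434.
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```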
-
**When running PGPT, it fails and gives the following error**

```
Traceback (most recent call last):
  File "C:\TCHT\privateGPT\privateGPT.py", line 87, in
    main()
  File "C:\TCHT\privateGPT\privat…
```
-
Whatever I try, I always get this error:

```
venv/lib/python3.10/site-packages/llama_cpp/llama_chat_format.py", line 1959, in __call__
    self._llava_cpp.llava_image_embed_make_with_bytes(
TypeError:…
```