-
Hi,
I am running Windows 11, Python 3.11.9, and ComfyUI in a venv environment.
I tried installing the latest llama-cpp-python for CUDA 12.4 in the manner shown below and received a string of errors. Can a…
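For reference, the documented way to build llama-cpp-python against a CUDA toolkit is to pass the CUDA backend flag through `CMAKE_ARGS` before invoking pip. This is a sketch, not the poster's exact command; note the flag name has changed across releases (older versions used `-DLLAMA_CUBLAS=on`), so match it to the version being installed:

```shell
# PowerShell on Windows; assumes the CUDA toolkit and Visual Studio
# build tools are already installed and on PATH.
$env:CMAKE_ARGS = "-DGGML_CUDA=on"
pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
```

On Linux/macOS the equivalent is `CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python`.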
-
### Question Validation
- [X] I have searched both the documentation and Discord for an answer.
### Question
Hello, I am wondering why the rel_props are not being saved to my graph index persistent…
-
### System Info
tgi docker image 2.0.4
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Passing th…
-
### Bug Description
I would like to create a status_checker API endpoint in FastAPI to track the creation of ChromaDB embeddings. I would also like to create these embeddings in async mode. Below I have mentioned the …
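The pattern being asked about can be shown without FastAPI or ChromaDB at all: schedule the embedding work as a background `asyncio.Task` keyed by a job id, and have the status endpoint read from a shared registry. This is a minimal stdlib sketch; `create_embeddings`, `start_job`, and `check_status` are hypothetical stand-ins for the embedding call and the two route handlers, and the in-memory dict would be Redis or a database in a real deployment:

```python
import asyncio
import uuid

# In-memory job registry (assumption: single process; use Redis/DB otherwise).
JOBS: dict[str, str] = {}

async def create_embeddings(job_id: str, texts: list[str]) -> None:
    """Stand-in for the async ChromaDB embedding step."""
    JOBS[job_id] = "running"
    for _ in texts:
        await asyncio.sleep(0)  # simulate awaiting an embedding call
    JOBS[job_id] = "done"

async def start_job(texts: list[str]) -> str:
    """What a POST /embeddings handler would do: schedule work, return an id."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = "pending"
    asyncio.create_task(create_embeddings(job_id, texts))
    return job_id

def check_status(job_id: str) -> str:
    """What a GET /status/{job_id} handler would return."""
    return JOBS.get(job_id, "unknown")

async def main() -> list[str]:
    job_id = await start_job(["doc one", "doc two"])
    first = check_status(job_id)   # "pending" or "running"
    await asyncio.sleep(0.01)      # let the background task finish
    return [first, check_status(job_id)]

statuses = asyncio.run(main())
```

In FastAPI the same structure maps onto two route functions sharing the registry; the key point is that the POST handler returns immediately after `asyncio.create_task` instead of awaiting the embedding coroutine.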
-
I downloaded the Llama 3 8B Instruct weights directly from the Meta repository (not Hugging Face): https://llama.meta.com/llama-downloads. I then tried to run the convert script using the command sugges…
-
### What is the issue?
I tried a 1xH100 box and got an error during installation. I got the same output from another, bigger 2xH100 box too:
```
root@C.11391672:~$ curl -fsSL https://ollama.com/instal…
-
### What happened?
For some reason, when I use the llama.cpp code in my project on T5 models, I get this error:
```
ggml.c:5278: !ggml_is_transposed(a)
```
At the same time llama-cli built with…
-
### Feature Description
Hi everyone,
We're currently working on an [open-source](https://github.com/merlinn-co/merlinn) project that uses llama-index in order to ingest + embed some data into Ch…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
### What happened?
After last week's updates, llama-cli (formerly main) either chats with itself, outputs random tokens, or stops answering altogether. The problem is the same on CPU and on NVIDIA GPUs…