-
### What happened?
https://github.com/nomic-ai/gpt4all/issues/2204
Since I upgraded to gpt4all 2.6.2 (which updated llama.cpp), my speed dropped from 3-4 t/s to 1 t/s. I am getting 1/3 the speed acro…
-
### Bug Report
"The newly installed gpt4all cannot add models, and the error message is 'network error: cannot retrieve http;//gpt4all,io/models/models3,json'."
### Example Code
### Expecte…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
### Bug Report
There was a noticeable slowdown in LLM inference, roughly 30-40% fewer tokens per second.
This change affected the CPU, CUDA, and Vulkan backends.
This regression…
-
Modified the embedding model name and api_base in the configuration file:
```
embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_AP…
```
-
### What happened?
I wanted to use the Kompute version to run on my GPU (Radeon RX570 4G), but whenever I use the `-ngl` argument to offload to the GPU, `llama-cli` silently exits before loading the model…
-
### What is the issue?
This is the #4334 issue again. I have multiple local CPU nodes and I am using Ollama behind the LiteLLM proxy.
The issue is with the embedding call for the **snowflake-arctic-embed** model.…
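For context, a minimal sketch of what an embedding call through a LiteLLM proxy's OpenAI-compatible `/v1/embeddings` endpoint looks like; the proxy address and API key below are placeholder assumptions for illustration, not values from this report.

```python
# Minimal sketch: embeddings request via a LiteLLM proxy
# (OpenAI-compatible API). Base URL and key are placeholders.
import json
import urllib.request

BASE_URL = "http://localhost:4000"  # hypothetical proxy address
payload = {
    "model": "snowflake-arctic-embed",
    "input": ["example text to embed"],
}
req = urllib.request.Request(
    f"{BASE_URL}/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-placeholder",  # hypothetical key
    },
)
# The actual network call is left commented out so the sketch stays offline:
# with urllib.request.urlopen(req) as resp:
#     embedding = json.load(resp)["data"][0]["embedding"]
```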
-
entity_graph
0
-
Moving from `OllamaEmbeddings(model="llama2:13b")` to `OllamaEmbeddings(model="llama2:7b")`, I am now getting a shape mismatch in my embeddings:
```none
ValueError: shapes (4096,) and (5120,) not …
```
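The mismatch follows from the two models' different hidden sizes: llama2:13b produces 5120-dimensional embeddings while llama2:7b produces 4096-dimensional ones, so comparing a new query vector against previously stored vectors fails. A minimal NumPy sketch reproducing the error shape (the vectors here are dummies):

```python
# Reproduces the shape error: a 4096-d query vector (llama2:7b)
# dotted against a 5120-d vector stored earlier with llama2:13b.
import numpy as np

query = np.zeros(4096)   # embedding dimension of llama2:7b
stored = np.zeros(5120)  # embedding dimension of llama2:13b

msg = ""
try:
    np.dot(query, stored)
except ValueError as e:
    msg = str(e)
print(msg)
```

The fix is to re-embed the stored documents with the same model used for queries; vectors from different models are not comparable even when the dimensions happen to match.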
-
After running `gaianet init`, no public URL or dashboard was exported.
Using an Ubuntu 22 Docker container.
```
[+] Checking the config.json file ...
[+] Downloading Phi-3-mini-4k-instruct-Q5_K…
```