-
Ollama support is available at this link: https://github.com/win4r/MoA
win4r updated 3 months ago
-
### What is the issue?
I'm using the OpenAI .NET library to connect to Ollama with the default llama3.2 model, and I get an "Unknown ChatFinishReason value." error from the library. You can see in belo…
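The failure mode described above, a client that parses the server's finish reason strictly and rejects any value it does not recognize, can be sketched in Python. This is a hypothetical illustration (the enum name, values, and functions are assumptions, not the actual .NET library internals); it contrasts strict parsing with a lenient fallback:

```python
from enum import Enum
from typing import Optional


class ChatFinishReason(Enum):
    # Hypothetical subset of finish reasons a strict client knows about.
    STOP = "stop"
    LENGTH = "length"
    TOOL_CALLS = "tool_calls"


def parse_finish_reason(raw: str) -> ChatFinishReason:
    """Strict parse: raises on any value outside the enum, mirroring
    an 'Unknown ChatFinishReason value.' style error."""
    try:
        return ChatFinishReason(raw)
    except ValueError:
        raise ValueError(f"Unknown ChatFinishReason value. ({raw!r})")


def parse_finish_reason_lenient(raw: str) -> Optional[ChatFinishReason]:
    """Lenient alternative: unknown values map to None instead of raising,
    so a non-OpenAI backend returning an extra reason does not break the client."""
    try:
        return ChatFinishReason(raw)
    except ValueError:
        return None
```

Strict parsing works against the upstream API it was written for but breaks when an OpenAI-compatible backend such as Ollama returns a reason outside that fixed set; the lenient variant degrades gracefully instead.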
-
I want to deploy it via Ollama, so I first converted it to a .gguf file with llama.cpp's convert_hf_to_gguf.py, but I got a KeyError "", and found the token is not in added_tokens_decoder of tokenizer_c…
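The lookup pattern behind a `KeyError: ''` like this can be sketched in plain Python. The data below is hypothetical (not a real tokenizer_config.json), but it shows the shape of the problem: the converter builds an index from `added_tokens_decoder`, and any token content absent from that mapping, including an empty string, raises a KeyError:

```python
import json

# Hypothetical fragment of a tokenizer_config.json; real files contain
# many more fields and tokens.
tokenizer_config = json.loads("""
{
  "added_tokens_decoder": {
    "128000": {"content": "<|begin_of_text|>"},
    "128001": {"content": "<|end_of_text|>"}
  }
}
""")

# Reverse map (token content -> token id), the kind of index a
# conversion script might build from added_tokens_decoder.
content_to_id = {
    entry["content"]: int(token_id)
    for token_id, entry in tokenizer_config["added_tokens_decoder"].items()
}


def lookup(content: str) -> int:
    # Raises KeyError('') when asked about a token whose content is an
    # empty string and is missing from added_tokens_decoder.
    return content_to_id[content]
```

The fix direction suggested by the error is on the data side: the token the converter expects has to be present in `added_tokens_decoder`, or the converter has to tolerate its absence.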
-
**Is your feature request related to a problem?**
The problem is an error showing "There was an error processing your request: No details were returned".
In the console it's showing "hook.js:608
Warning: Encountere…
-
### What is the issue?
I hope that this is a PEBCAK issue and that there is a quick environment setting, but with my searching I couldn't find one.
## TL;DR
When using the [Continue Plugin](https://…
-
### Describe the bug
When using llamaindex's complete, it internally calls complete() -> chat(); both record a token count (the call is recursive), and these counts are summed.
### To reproduce
Codesnippet to reproduc…
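The double-counting described above can be sketched in a few lines of Python. This is a hypothetical stand-in class, not the real llamaindex internals: `complete()` records the prompt's tokens and then delegates to `chat()`, which records them again, so the summed total is twice the real usage:

```python
class CountingLLM:
    """Hypothetical sketch of an LLM wrapper that tracks token usage."""

    def __init__(self) -> None:
        self.total_tokens = 0

    def _count(self, text: str) -> int:
        # Stand-in tokenizer: one token per whitespace-separated word.
        return len(text.split())

    def chat(self, prompt: str) -> str:
        self.total_tokens += self._count(prompt)  # counted once here...
        return prompt.upper()  # stand-in "response"

    def complete(self, prompt: str) -> str:
        self.total_tokens += self._count(prompt)  # ...and counted again here
        return self.chat(prompt)  # complete() delegates to chat()


llm = CountingLLM()
llm.complete("hello world")
# total_tokens is 4, not 2: the 2-token prompt was counted in both
# complete() and chat().
```

The usual fix for this pattern is to record usage at exactly one layer (either the delegating call or the delegate), not both.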
-
Has anyone successfully run a Qdrant binary in a RunPod container? I'm trying to run Qdrant on my RunPod so that I can have it store my embeddings. Here's my Dockerfile for the build; I install the …
-
### What happened?
With the llama.cpp version used in Ollama 0.3.14, running a vision model (at least nanollava and moondream) on Linux on the CPU (no CUDA) results in `GGML_ASSERT(i01 >= 0 && i01 < …
-
Add LLM inference support for https://lmstudio.ai/
Implementation notes:
- Use the appropriate SDK: https://github.com/lmstudio-ai/lmstudio.js
- Target directory: `src/adapters/lmstudio` with files `…
-
Upon downloading the .dmg installation file, I am getting an error:
**"ollama-grid-search is damaged and can't be opened. You should eject the disk image."**
I am running on an M2 MacBook.