-
Hi, I want to build my own chat AI interface as a study and personal project. I find the library interesting; the only problem is that I use Ollama as my LLM provider (a local server for LLMs). So I think that…
-
### Short description of current behavior
I created an engine, a model, and an agent, but when I try to query the agent, it shows an error.
provider: ollama
engine: langchain_engine
model: llama3
```
An …
```
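A quick way to check whether the local Ollama server itself responds, independently of the engine/agent wiring, is to hit its REST API directly. This is only a sketch against Ollama's default chat endpoint (`http://localhost:11434/api/chat`); the model name `llama3` matches the config above, and the actual network call is left commented out since it needs a running server:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build a POST request for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON response
    }
    return request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("llama3", "Hello!")
print(req.full_url)  # http://localhost:11434/api/chat
# Uncomment with a running Ollama server:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])
```

If this direct call succeeds but the agent query still fails, the problem is in the engine/agent layer rather than in Ollama.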
-
* https://github.com/TheR1D/shell_gpt/wiki/Ollama
* Config:
```
CHAT_CACHE_PATH=C:\Users\Y00655~1\AppData\Local\Temp\chat_cache
CACHE_PATH=C:\Users\Y00655~1\AppData\Local\Temp\cache
CHAT_CACHE_LENGTH=…
```
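For comparison, the Ollama wiki page linked above routes shell_gpt through LiteLLM; assuming that setup is still current, the Ollama-relevant entries in `.sgptrc` would look roughly like this (the model name is just an example):

```
DEFAULT_MODEL=ollama/llama3
API_BASE_URL=http://localhost:11434
USE_LITELLM=true
OPENAI_USE_FUNCTIONS=false
```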
-
### Describe the bug
- Cloned the repo
- Installed everything needed
- Created the Modelfile:
  ```
  FROM qwen2.5-coder:7b
  PARAMETER num_ctx 32768
  ```
- Ran the query in PowerShell, but either I don't see o…
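Assuming the missing step is turning that Modelfile into a runnable model, the flow can be sketched as below; the derived model name `qwen2.5-coder-32k` is made up, and the `ollama` calls are skipped when the CLI isn't on the PATH:

```python
import shutil
import subprocess
from pathlib import Path

# Recreate the Modelfile from the report; num_ctx raises the context window.
Path("Modelfile").write_text(
    "FROM qwen2.5-coder:7b\n"
    "PARAMETER num_ctx 32768\n"
)

if shutil.which("ollama"):  # needs the Ollama CLI and a running server
    # Register the derived model, then query it once.
    subprocess.run(["ollama", "create", "qwen2.5-coder-32k", "-f", "Modelfile"])
    subprocess.run(["ollama", "run", "qwen2.5-coder-32k", "Say hello"])
```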
-
Hey, I love this extension; it's very useful for my university thesis writing. When I use the "extend selection" feature (I use Ollama, btw), the application just stops responding and goes into wait…
-
I can't get any version of my LMPs to produce a commit message when trying to use Ollama for this. Note that I'm also using Ollama to run the LMP, so could it be that the local Ollama server is blocked b…
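One way to probe the "server blocked by concurrent use" theory is to fire two chat requests at the local server at once and see whether the second one stalls. A sketch against Ollama's default REST endpoint (the model name is an example; when no server is listening, each call just reports itself unreachable):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

def chat(prompt: str) -> str:
    """Send one chat request to the local Ollama server; report failures as text."""
    payload = {
        "model": "llama3",  # example model name
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    req = request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with request.urlopen(req, timeout=60) as resp:
            return json.load(resp)["message"]["content"]
    except OSError as exc:  # connection refused, timeouts, etc.
        return f"unreachable: {exc}"

# Two requests in flight at once; if the second consistently waits for the
# first to finish, the server (or its request queue) is serializing them.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(chat, ["hi there", "hello again"]))
print(len(results))  # 2
```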
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
Hi Team,
I am already using LMStudio and OLLAMA for model deployments. Given that this model is LMCPP compatible and uses it, how can this model be deployed, hosted, and used with LMStudio or OLLAMA? It …
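For the Ollama side specifically, a llama.cpp-compatible GGUF file can generally be imported through a Modelfile whose `FROM` line points at the file. A sketch, where the GGUF path and the model name `my-model` are placeholders, and the CLI calls are skipped when `ollama` isn't installed:

```python
import shutil
import subprocess
from pathlib import Path

# Point a Modelfile at a local GGUF export ("./model.gguf" is a placeholder).
Path("Modelfile").write_text("FROM ./model.gguf\n")

if shutil.which("ollama"):  # needs the Ollama CLI and a running server
    subprocess.run(["ollama", "create", "my-model", "-f", "Modelfile"])
    subprocess.run(["ollama", "run", "my-model"])
```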
-
TLDR: Add [Ollama](https://ollama.com/) Component to Aspire similar to the [OpenAI](https://learn.microsoft.com/en-us/dotnet/aspire/azureai/azureai-openai-component?tabs=dotnet-cli) component.
## C…
-
### What is the issue?
If I try to run the `llama3.2-vision` model using `ollama run llama3.2-vision` on my Arch Linux machine, I get this error:
```
Error: llama runner process has terminated: GG…