-
Installed the model locally via Ollama and ran `ollama serve` successfully in the command line,
but I keep seeing an `Ollama is not running in your IDE` error in VS Code, and chat/other functionalities…
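A quick way to narrow this down is to check whether the server is actually reachable over HTTP. The sketch below assumes Ollama's default base URL (`http://localhost:11434`); a running server answers a plain GET on `/` with `Ollama is running`.

```python
import urllib.request
import urllib.error


def ollama_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        # A running Ollama server responds to GET / with status 200.
        with urllib.request.urlopen(base_url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print("reachable" if ollama_running() else "not reachable")
```

If this prints `not reachable` while `ollama serve` is up, the IDE extension is likely pointing at a different host or port than the one the server bound to.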
-
While I understand that using GPT-4 gives the best results, the landscape changes very quickly. Also, some users have strict security requirements and can run only local LLMs.
Instead of trying to support all …
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
chatollama-1 exited with code 139
chatollama-1 | Listening on http://[::]:3000
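For context, Docker reports a container killed by a signal as 128 plus the signal number, so exit code 139 corresponds to signal 11 (SIGSEGV), i.e. the process segfaulted. That arithmetic can be sketched as:

```python
import signal


def signal_from_exit_code(code: int) -> str:
    """Map a >128 container exit code to the terminating signal's name."""
    if code <= 128:
        raise ValueError("exit codes <= 128 do not indicate death by signal")
    # Docker convention: exit code = 128 + signal number.
    return signal.Signals(code - 128).name


print(signal_from_exit_code(139))  # SIGSEGV: the process crashed with a segfault
```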
-
### Issue:
Error "Exiting chain with error: invalid character '
-
### What is the issue?
The `main_gpu` option is not working as expected.
My system has two GPUs. I've sent the request to `/api/chat`
```
{
"model": "llama3.1:8b-instruct-q8_0",
"message…
```
-
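For reference, a complete request of this shape could look like the sketch below. The reporter's original payload was truncated, so this is an assumed reconstruction, not their exact request; `main_gpu` selects which GPU (zero-indexed) handles generation, and the default base URL is assumed.

```python
import json
import urllib.request

# Hypothetical reconstruction of a /api/chat request that pins generation
# to the second GPU (index 1) via the main_gpu option.
payload = {
    "model": "llama3.1:8b-instruct-q8_0",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    "options": {"main_gpu": 1},
}


def send_chat(base_url: str = "http://localhost:11434") -> dict:
    """POST the payload to Ollama's /api/chat endpoint and decode the reply."""
    req = urllib.request.Request(
        base_url + "/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    print(send_chat()["message"]["content"])
```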
### The problem
Sometimes, without warning, the chat window fails to insert the prompt. The chat window remains functional, however. The error is not easily reproducible, as it seems to happen randomly.
…
-
I really appreciate the Ollama project - many thanks to everyone involved in this awesome software!
Unfortunately I believe I have just found an issue. The recently published Qwen 1.5 model support…
-
Ollama chat function is empty.
-
It would be great to rename chat conversations and even be able to press a button to ask the AI to do so. I tried to ask the AI but it is always too wordy. It would be preferable to set up a conversat…