-
### How are you running AnythingLLM?
Docker (local)
### What happened?
I'm using Ollama with nomic-embed-text; uploading a Word document produces errors.
### Are there known steps to reproduce?
_No response_
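One way to isolate this kind of failure is to call Ollama's embeddings endpoint directly, outside AnythingLLM's upload pipeline. A minimal sketch, assuming the default local port 11434 and the `/api/embeddings` route; the helper name and sample text are illustrative, not from the issue:

```python
import json
import urllib.request

# Assumed default Ollama address; adjust if your Docker network maps it differently.
EMBED_URL = "http://localhost:11434/api/embeddings"

def embed_request(text: str, model: str = "nomic-embed-text") -> urllib.request.Request:
    """Build the POST request Ollama's embeddings endpoint expects."""
    body = json.dumps({"model": model, "prompt": text}).encode()
    return urllib.request.Request(
        EMBED_URL, data=body, headers={"Content-Type": "application/json"}
    )

# Send one chunk of the document by hand: if this call also errors,
# the problem is the embedder, not AnythingLLM's document handling.
req = embed_request("First paragraph of the Word document.")
```

If the direct call succeeds, the error is more likely in document parsing or chunking than in the embedding model itself.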
-
I thought perhaps more screenshots would help me explain what I mean. Please take a look at these three screenshots. On the left is what I see; it's all the information I have, and I'm trying to guess w…
-
### What is the issue?
I am not able to use my AMD Radeon RX 6800S with Ollama. When I try, it falls back to the CPU. I have tried both the ollama package and a fresh install with scripts/install.sh …
arael updated 6 months ago
-
### What happened?
I have deployed quivr to use Ollama.
I deployed on Kubernetes and I did a port-forward in order to have it working locally.
https://github.com/jmorganca/ollama/tree/main/exampl…
-
```
chatollama-1 | KnowledgeBaseFile with ID: 29
chatollama-1 | URL: http://google.com
chatollama-1 | [nuxt] [request error] [unhandled] [500] Failed to launch the browser process!
chatollama-1 |…
```
-
### 💻 Operating System
Ubuntu
### 📦 Environment
Docker
### 🌐 Browser
Chrome
### 🐛 Bug Description
Got this when trying to use an Ollama custom model:
```
{
  "error": {
    "headers": {
      …
```
-
Hi, I am having trouble using OllamaEmbedding. I am unable to retrieve the correct vectors, and the similarity score is really high. I was able to get the correct vectors with OpenAIEmbedding, but I…
-
The application is great, but unfortunately in Vietnam we cannot create a Claude account.
-
I'm trying to set a maximum number of output tokens with LlamaIndex, but it doesn't work. Can someone help me?
```python
import pandas as pd
import os
from llama_index.llms.ollama import Ollama
from transformers impo…
```
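In Ollama's own API, the cap on generated tokens is the `num_predict` option, which LlamaIndex's `Ollama` wrapper can pass through via `additional_kwargs` (hedged: check this against your installed llama_index version). A sketch of the raw request body, so the option can be verified independently of the wrapper; the model name and prompt are placeholders:

```python
import json

def generate_payload(prompt: str, model: str, max_tokens: int) -> str:
    """JSON body for Ollama's /api/generate with a cap on new tokens.

    num_predict is Ollama's max-new-tokens option; -1 means unlimited.
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_predict": max_tokens},
    })

# Placeholder model name, not from the snippet above.
body = generate_payload("Summarize the dataframe.", "gemma:2b", 256)
```

With the wrapper, the equivalent would presumably be `Ollama(model="gemma:2b", additional_kwargs={"num_predict": 256})`; if that has no effect, sending the raw payload above confirms whether the server honors the option at all.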
-
When I run Ollama on my local PC with the gemma:2b model, I get a response.
My REST call works; below is a screenshot:
![image](https://github.com/OpenDevin/OpenDevin/assets/19372922/307fbce0-9599-48…