-
When I run `npm run dev`, it outputs the message `11/10/2024, 2:12:29 PM [CONVEX A(aiTown/agentOperations:agentGenerateMessage)] Uncaught Error: Request to http://localhost:11434/api/embeddings forbidden`…
-
### Bug Description
```
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama2", request_timeout=60.0)
response = llm.complete("What is the capital of France?")
print(response)
```
A…
-
**Is your feature request related to a problem? Please describe.**
I would like to reuse the models I have already downloaded in Ollama.
**Describe the solution you'd like**
Being able to use m…
-
Hi,
I have two requests 😅:
1. Could you publish a Docker image?
2. Is it possible to use the Ollama API instead of OpenAI? (a sketch follows below)
Thanks, great and amazing project!!!!
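
On the second request: Ollama exposes an OpenAI-compatible API under `/v1`, so projects that talk to OpenAI through the standard client can often be pointed at a local Ollama server instead. A minimal sketch, assuming the standard `openai` Python client and a locally pulled `llama3` model (both are assumptions, not this project's actual configuration):

```
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint; the API key is a placeholder,
# since Ollama does not check it.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3",  # assumption: any model already pulled into Ollama
    messages=[{"role": "user", "content": "Say hello."}],
)
print(reply.choices[0].message.content)
```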
-
An error is raised when `ollama.generate()` is called inside `def Intent_Recognition(query, choice)`:
```
rec_result = ollama.generate(model=choice, prompt=prompt)['response']
```
File "C:\Users\insnood\AppData\Roaming\Python\Python31…
-
### Description
When using the command `crewai create crew projectname` with crewai 0.79.4, it configures the .env with the information, but when you run the project you get a missing key_n…
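
If the goal is to run the generated crew against a local Ollama model rather than a hosted key, one workaround is to construct the LLM in code instead of relying on the generated .env; a sketch, assuming crewai's `LLM` wrapper routes model strings through LiteLLM, with the model tag and base URL as assumptions:

```
from crewai import Agent, LLM

# Assumption: an "ollama/<model>" identifier plus a local base_url
# selects a local Ollama model instead of a hosted provider.
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

agent = Agent(
    role="researcher",
    goal="Answer questions",
    backstory="A local-model test agent",
    llm=llm,
)
```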
-
Hi, this Docker image has been working great for me, and of course it's totally valid to want to manage your own software, but I just wanted to point out that there is an upstream Docker image that ship…
-
Hi,
I'm trying to deploy some models using the operator, but I'm facing naming issues when the model names contain dots or other characters that are invalid for the Kubernetes spec.metadata.name. Some examples:
####…
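
One common way to handle this, shown purely as an illustrative sketch (not part of the operator), is to sanitize the model tag into a DNS-1123-style name before using it as a resource name:

```
import re

def k8s_safe_name(model_name, max_len=63):
    # Lowercase, replace dots/colons/slashes/underscores with '-', collapse
    # repeats, and trim so the result is a valid DNS-1123 label.
    name = re.sub(r"[^a-z0-9-]+", "-", model_name.lower())
    name = re.sub(r"-{2,}", "-", name).strip("-")
    return name[:max_len].rstrip("-")

print(k8s_safe_name("llama3.1:8b"))           # llama3-1-8b
print(k8s_safe_name("hf.co/org/model:Q4_K"))  # hf-co-org-model-q4-k
```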
-
I want to use qwen2.5:3b as an LLM judge.
I read the docs, but I'm still stuck on this.
This is my example code:
```
from opik.evaluation.metrics import Hallucination
metric = Hallucination…
```
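
A minimal sketch of what this could look like, assuming Opik's metrics route model strings through LiteLLM (so an `ollama/...` identifier selects a local Ollama judge); the parameter names below are assumptions and should be checked against the Opik docs:

```
from opik.evaluation.metrics import Hallucination

# Assumption: "ollama/qwen2.5:3b" is resolved by LiteLLM against a local
# Ollama server, so qwen2.5:3b acts as the judge model.
metric = Hallucination(model="ollama/qwen2.5:3b")

result = metric.score(
    input="What is the capital of France?",
    output="The capital of France is Berlin.",
    context=["France is a country in Europe. Its capital is Paris."],
)
print(result.value, result.reason)
```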
-
### Description
Hi, I think you are calling the wrong endpoint for local embeddings with Ollama. If I use the settings from your instructions [here](https://github.com/Cinnamon/kotaemon/blob/main/docs/loca…
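
For reference, Ollama's native embeddings endpoint can be exercised directly; a minimal sketch, assuming a local Ollama server and that an embedding model such as `nomic-embed-text` has already been pulled (the model name is an assumption):

```
import requests

# POST to Ollama's native embeddings endpoint; the response contains an
# "embedding" vector for the given prompt.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello world"},
    timeout=30,
)
resp.raise_for_status()
print(len(resp.json()["embedding"]))  # vector dimensionality
```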