-
So I'm getting this error every time:
Traceback (most recent call last):
  File "/home/orangepi/agent-zero/run_ui.py", line 30, in <module>
    from initialize import initialize
  File "/home/orangepi/…
-
**Describe the bug**
I want to use local LLMs to evaluate my RAG app. I have tried Ollama and Hugging Face models, but neither of them is working.
Ragas version: 0.1.11
Python version: 3.11.3
**…
-
Hello,
I'm using the following script to fine-tune the Llama 3 model with a custom dataset of questions and responses, using the `{"prompt": "", "completion": ""}` format defined [here](https://github.com/…
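For anyone reproducing this, a minimal sketch of writing a dataset in that prompt/completion format as JSONL (the field names follow the format quoted above; the example pairs are made up):

```python
import json

# Hypothetical question/response pairs; field names "prompt" and
# "completion" follow the format quoted above.
pairs = [
    ("What is the capital of France?", "Paris."),
    ("Who wrote Hamlet?", "William Shakespeare."),
]

# Write one self-contained JSON object per line (JSONL).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for question, response in pairs:
        record = {"prompt": question, "completion": response}
        f.write(json.dumps(record) + "\n")
```

Each line of `train.jsonl` is then one independent training example, which is what most fine-tuning loaders expect.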
-
### What is the issue?
When using llm-benchmark with Ollama (https://github.com/MinhNgyuen/llm-benchmark), I get around 80 t/s with Gemma 2 2B. When asking the same questions to llama.cpp in conve…
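Discrepancies like this often come down to what is being timed: a wall-clock benchmark may fold prompt processing into the denominator, while llama.cpp's own timings report generation speed separately. A minimal sketch of the two ways of computing t/s (the timings below are made up for illustration):

```python
# Illustrative timings (made up): prompt ingestion vs. token generation.
prompt_eval_seconds = 0.5   # time spent processing the prompt
generation_seconds = 4.0    # time spent producing new tokens
generated_tokens = 320

# Generation-only throughput, as llama.cpp's timing output reports it.
gen_tps = generated_tokens / generation_seconds

# End-to-end throughput, as a wall-clock benchmark would measure it.
e2e_tps = generated_tokens / (prompt_eval_seconds + generation_seconds)

print(f"generation-only: {gen_tps:.1f} t/s")   # 80.0 t/s
print(f"end-to-end:      {e2e_tps:.1f} t/s")   # ~71.1 t/s
```

So two tools can report noticeably different t/s for the same model and hardware without either one being wrong.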
-
Hi, I recently tried VS Code with the [Continue](https://continue.dev/) plugin, configured to use my own Ollama server and LLMs (https://ollama.ai), and was amazed at how well this works.
I'm no…
-
Please support [Zephyr 7B Gemma](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)! This [HF Chat](https://huggingface.co/spaces/HuggingFaceH4/zephyr-7b-gemma-chat) is a lot better than Zephy…
-
Aider version: 0.59.1
Python version: 3.10.12
Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Python implementation: CPython
Virtual environment: No
OS: Linux 5.15.0-122-generic (64bit)
…
-
We need to implement the OpenAI and Hugging Face models inside our existing models folder.
OpenAI model file:
https://github.com/promptslab/PromptifyJs/blob/f9176cb7b703995470f4095233bf6660f8839093/m…
-
## Installation Method
I forked the latest official Helm chart to support v0.3.22 and deployed Open WebUI to my Kubernetes cluster.
## Environment
- **Open WebUI Version:** v0.3.22
**Conf…
-
Hi.
When I try to use a custom endpoint (https://github.com/xtekky/gpt4free/) for the OpenAI chat, I get the error "Chat setup incomplete: The LLM endpoint is missing or not supported".