-
I tried to run `docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt`
in a compose file similar to the one in the repo:
```
version: '3'
services:
…
```
-
What about custom/private LLMs? Will there be an option to use some of LangChain's local features, like llama.cpp?
-
According to the [documentation](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md), the only way to change the SYSTEM is to create a new model with modelfile using an existing LLM model…
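As that doc describes, the workflow is a Modelfile that layers a new SYSTEM prompt onto an existing model; a minimal sketch (the base model and prompt text here are placeholders):

```
FROM llama3
SYSTEM "You are a concise assistant."
```

then `ollama create my-assistant -f Modelfile` (name hypothetical) registers the derived model, and `ollama run my-assistant` picks up the new system prompt.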
-
chatollama | Chat with knowledge base with id: 5
chatollama | Knowledge base 竺可桢浙大 with embedding "nomic-embed-text"
chatollama | Creating embeddings for Ollama served model: nomic-embed-text
c…
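For context, the call ChatOllama is making in that log maps onto Ollama's embeddings endpoint; a minimal sketch using only the standard library (default host/port assumed, helper names are mine):

```python
import json
import urllib.request

def build_embed_request(text, model="nomic-embed-text"):
    # Payload shape for Ollama's /api/embeddings endpoint:
    # a model name plus the text to embed.
    return {"model": model, "prompt": text}

def embed(text, host="http://localhost:11434"):
    # POST the payload to a locally running Ollama server and
    # return the embedding vector from the JSON response.
    req = urllib.request.Request(
        f"{host}/api/embeddings",
        data=json.dumps(build_embed_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

If the model named in the payload has not been pulled (`ollama pull nomic-embed-text`), the server responds with an error rather than a vector.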
-
### What is the issue?
Upon running "ollama run gemma:2b" (though this happens for all tested models: llama3, phi, tinyllama), the loading animation appears and after ~5 minutes (estimate, untimed)…
-
### Bug Report
I tried to use GPT4All as a local LLM server with an OpenAI-like API for serving as a code copilot via the [continue](https://github.com/continuedev/continue) plugin for VSCode.
U…
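For reference, pointing Continue at an OpenAI-compatible local server is done through its `config.json`; a sketch under the assumption that GPT4All's server is listening on its default port 4891 (the title and model strings are placeholders):

```
{
  "models": [
    {
      "title": "GPT4All (local)",
      "provider": "openai",
      "model": "local-model",
      "apiBase": "http://localhost:4891/v1"
    }
  ]
}
```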
-
I downloaded the https://huggingface.co/coreml-projects/Llama-2-7b-chat-coreml model and compiled the chat example with Xcode. When running the example prompt, it takes around 15 minutes to complete. I am not …
-
### Description
Hello wonderful OpenCommit family! 🌈 Before diving in, I must express my admiration for the code owners and contributors of OpenCommit. You folks rock! 🎸 Now, I have another proposa…
-
### Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [x] I'm not able to find an [open issue]…
-
Trying to run the augment toolkit on macOS (M3), with Ollama (`ollama run llama3`), using the following config.yaml:
```
PATH:
  INPUT: "./raw_text_input"
  OUTPUT: "./output"
  DEFAULT_PROMPTS: "./prompts" # …
```