-
### Describe the issue as clearly as possible:
I'm trying to use Ollama with the OpenAI API to generate a Pydantic object.
I think that's because Ollama doesn't support `response_format`, which is …
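If Ollama's OpenAI-compatible endpoint ignores `response_format`, one workaround is to ask for JSON explicitly in the prompt and validate the reply with Pydantic yourself. A minimal sketch, assuming a local Ollama on the default port and a pulled `llama3` model (both assumptions):
```python
# Sketch: prompt for JSON explicitly and validate with Pydantic,
# instead of relying on `response_format`.
from openai import OpenAI
from pydantic import BaseModel


class Answer(BaseModel):
    city: str
    population: int


# Ollama exposes an OpenAI-compatible endpoint at /v1; the API key is unused.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3",  # assumed model name; use whatever you have pulled
    messages=[
        {"role": "system", "content": "Reply with JSON only, matching "
         '{"city": string, "population": integer}.'},
        {"role": "user", "content": "Name a large city and its approximate population."},
    ],
)

# Validate the raw text into a typed object; raises ValidationError on bad JSON.
answer = Answer.model_validate_json(resp.choices[0].message.content)
print(answer)
```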
-
Hi Team,
I am already using LM Studio and Ollama for model deployments. Given that this model is llama.cpp compatible, how can it be deployed, hosted, and used with LM Studio or Ollama? It …
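In case it helps, for the Ollama side the usual route for a llama.cpp-compatible GGUF is a Modelfile whose `FROM` line points at the local weights, followed by `ollama create`. A minimal sketch scripting those documented steps from Python; the file and model names here are placeholders:
```python
# Sketch: register a local GGUF with Ollama by writing a Modelfile and
# running the documented `ollama create` command. Paths are placeholders.
import pathlib
import subprocess

gguf = pathlib.Path("model.gguf")      # hypothetical local weights file
modelfile = pathlib.Path("Modelfile")

# `FROM` may point at a local GGUF file; see Ollama's Modelfile docs.
modelfile.write_text(f"FROM ./{gguf.name}\n")

# Build the model under a name of your choosing; afterwards `ollama run my-model` works.
subprocess.run(["ollama", "create", "my-model", "-f", str(modelfile)], check=True)
```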
-
I am getting an error when I run this in the terminal:
`Error: unknown flag: --device`
I am still trying to figure out if this is running correctly and probably need to install a web-ui int…
-
I happened to have ollama running while setting up smartcat. It does not have the phi3 model installed. Here is the output from running `sc` for the first time:
```
❯ sc
Prompt config file not foun…
```
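Until smartcat prompts for this itself, pulling the missing model first should clear the error. A minimal sketch against Ollama's documented `/api/pull` endpoint, default port assumed (`ollama pull phi3` on the CLI does the same):
```python
# Sketch: pull the phi3 model through Ollama's REST API before re-running `sc`.
import requests

resp = requests.post(
    "http://localhost:11434/api/pull",
    json={"model": "phi3", "stream": False},  # stream=False returns one JSON object
    timeout=600,  # pulls can take a while
)
resp.raise_for_status()
print(resp.json())  # expected to report a final "success" status
```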
-
Generative AI could be integrated into Recipya for the following tasks:
- Categorize `uncategorized` recipes based on the recipe itself (see the sketch below).
- More ideas to come...
Integrate the Ollama API because i…
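As a rough sketch of what the categorization call above could look like against Ollama's `/api/chat` endpoint (the model name and category list here are assumptions):
```python
# Sketch: ask a local Ollama model to pick one category for a recipe.
import requests

CATEGORIES = ["breakfast", "dinner", "dessert", "uncategorized"]  # hypothetical set

def categorize(recipe_text: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",  # assumed; any pulled model works
            "stream": False,
            "messages": [
                {"role": "system",
                 "content": f"Answer with exactly one of: {', '.join(CATEGORIES)}."},
                {"role": "user", "content": recipe_text},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"].strip()

print(categorize("Flour, sugar, eggs, butter; bake 25 minutes at 180C."))
```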
-
### 🚀 The feature, motivation and pitch
Ollama has the docker image [ollama/ollama:rocm](https://hub.docker.com/layers/ollama/ollama/rocm/images/sha256-2368286e0fca3b4f56e017a9aa4809408d8a8c6596e3cbd…
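For anyone trying the image directly, a rough equivalent of the usual `docker run` invocation via the Docker SDK for Python, with the ROCm device nodes passed through (the container and volume names are assumptions):
```python
# Sketch: start the ollama/ollama:rocm image with the AMD GPU device
# nodes mapped in and the API port published.
import docker  # pip install docker

client = docker.from_env()
container = client.containers.run(
    "ollama/ollama:rocm",
    detach=True,
    devices=["/dev/kfd:/dev/kfd", "/dev/dri:/dev/dri"],  # ROCm device nodes
    ports={"11434/tcp": 11434},
    volumes={"ollama": {"bind": "/root/.ollama", "mode": "rw"}},  # model cache
    name="ollama",
)
print(container.name)
```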
-
The API documentation (https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion) refers to https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-va…
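For anyone cross-referencing the two pages: the Modelfile parameters described there are the same keys the API accepts per request under `options`. A small sketch (model name assumed):
```python
# Sketch: request-time `options` use the same keys as Modelfile PARAMETER lines.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # assumed model name
        "prompt": "Say hi.",
        "stream": False,
        "options": {         # per-request overrides of Modelfile parameters
            "temperature": 0.2,
            "num_ctx": 4096,
            "stop": ["\n\n"],
        },
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```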
-
Hi,
I wanted to give this a try and installed ollama locally. I am able to use the ollama API on http://localhost:11434/api/generate with curl.
I ran `export OLLAMA_API_BASE=http://localhost:…
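For reference, the request that works with curl can be reproduced in a few lines that honor `OLLAMA_API_BASE` when it is set; a minimal sketch (model name assumed):
```python
# Sketch: mirror the working curl call, reading the base URL from OLLAMA_API_BASE.
import os
import requests

base = os.environ.get("OLLAMA_API_BASE", "http://localhost:11434")
resp = requests.post(
    f"{base}/api/generate",
    json={"model": "llama3", "prompt": "Hello", "stream": False},  # model assumed
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```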
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
Following the Quickstart documentation provided [here](https://docs.privategpt.dev/quickstart/gettin…
-
**Is your feature request related to a problem? Please describe:**
Hi, I modified `.env.local` with:
```bash
# You only need this environment variable set if you want to use oLLAMA models
#EXAMPLE…
```
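A minimal sketch of how an app could pick this up, assuming python-dotenv and a placeholder variable name, since the real one is cut off above:
```python
# Sketch: load .env.local and sanity-check the configured Ollama endpoint.
# "OLLAMA_BASE_URL" is a placeholder name; use the variable your app expects.
import os

import requests
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv(".env.local")
base = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

# Quick connectivity check: list the models available locally.
models = requests.get(f"{base}/api/tags", timeout=10).json().get("models", [])
print([m["name"] for m in models])
```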