danielmiessler / fabric

fabric is an open-source framework for augmenting humans using AI. It provides a modular framework for solving specific problems using a crowdsourced set of AI prompts that can be used anywhere.
https://danielmiessler.com/p/fabric-origin-story
MIT License

[Question]: How to use ollama successfully? #373

Closed bjcbusiness closed 4 months ago

bjcbusiness commented 4 months ago

What is your question?

After reading the documentation, I'm still not clear on how to get local Llama models working. I've had to cobble together an understanding from GitHub issues and older documentation. What I came to in the end is this:

~/.config/fabric/.env has

OPENAI_API_KEY=ollama
OPENAI_BASE_URL=http://127.0.0.1:11434/v1

ollama has only 1 model installed: mistral:instruct

I've tried numerous commands, including --remoteOllamaServer, and they all lead to the same result:

Error: Client error '404 Not Found' for url 'http://127.0.0.1:11434/v1/models'

I've tried different BASE_URLs, like removing the /v1, but it doesn't help. I've also tried a command that requests a specific pattern and I get the same result:

fabric --model mistral --pattern summarize --text "hello my name is ben" --remoteOllamaServer http://localhost:11434/v1
Error: Client error '404 Not Found' for url 'http://127.0.0.1:11434/v1/models'
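
As a sanity check (assuming a stock local install on the default port), Ollama's native API can also be queried directly, independent of the /v1 OpenAI-compatible path fabric is calling:

curl http://127.0.0.1:11434/api/tags    # lists the locally installed models as JSON if the server is up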

I appreciate any help you can provide.

createchange commented 4 months ago

Hi @bjcbusiness. I was having issues myself - hopefully I can steer you right.

First things first, make sure you have the most recent version of ollama.

> ollama -v
ollama version is 0.1.32

I have no .env file configured - I commented out my OpenAI key.

I also do not specify a remote server - the --help text says you should only do that when you are not running locally, which you evidently are. The relevant text: "ONLY USE THIS if you are using a local ollama server in a non-default location or port."
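
For completeness, a rough sketch of when that flag is meant to come into play - pointing fabric at Ollama on a different machine (the host below is made up, and I'm not certain whether the flag wants the bare host:port or the /v1 path):

fabric --model llama3:8b --pattern summarize --text "hello" --remoteOllamaServer http://192.168.1.50:11434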

This command worked for me:

> pbpaste | ~/.local/bin/fabric -m llama3:8b -p create_summary --stream
# IDENTITY and PURPOSE
As an expert content summarizer, I will condense the provided text into a Markdown-formatted summary.

# OUTPUT SECTIONS

## ONE SENTENCE SUMMARY:
The story concludes with Darrow's son Pax being born to him and Mustang, while the world is rebuilding from the aftermath of war and nuclear destruction.

...

The last thing I'll note is that fabric --listmodels does not return a mistral result for me. I am unsure whether fabric automatically appends :latest to the model name, but you may want to try specifying the full tag to see if that clears up the issue.
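
If it helps, a quick sanity check along those lines - compare what Ollama reports as installed with what fabric lists, then pass the exact tag (using the mistral:instruct tag from your original post):

ollama list
fabric --listmodels
echo "hello my name is ben" | fabric -m mistral:instruct -p summarize --stream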

bjcbusiness commented 4 months ago

Wow! Who would have guessed "ignore everything, empty your .env and just refer to the local model." This did work, and I'm thankful for it.

The documentation could use an update here, though.

createchange commented 4 months ago

Absolutely - I'm glad to have helped! I actually misspoke about what the documentation says regarding the remote Ollama config, but nonetheless - glad you're able to make use of it!

JulioMoreyra commented 3 months ago

It works for me! Thanks!

DmacMcgreg commented 2 months ago

Got it working.

  1. vim ~/.config/fabric/.env
  2. add the following (replacing the previous content, which had just "# No API key set"):
    OPENAI_API_KEY="NULL"
    OPENAI_BASE_URL=https://127.0.0.1:11434/v1/
    CLAUDE_API_KEY="NULL"
    GOOGLE_API_KEY="NULL"

It should work now. The last change that made it work for me was making sure OPENAI_BASE_URL was https, not http, even though it's localhost...

Make sure to unset your local environment variables in case you exported them in one of the earlier steps, so that fabric uses the .env file, e.g. unset OPENAI_BASE_URL.
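
A quick way to double-check that last point (assuming a POSIX shell):

env | grep -E 'OPENAI|CLAUDE|GOOGLE'    # anything printed here was exported and can shadow ~/.config/fabric/.env
unset OPENAI_API_KEY OPENAI_BASE_URL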

zanagraf commented 2 months ago

I actually have exactly the same issue.

I tried this (quoting @DmacMcgreg above):

> Got it working.
>
> 1. vim ~/.config/fabric/.env
> 2. add the following (replacing the previous content, which had just "# No API key set"):
>    OPENAI_API_KEY="NULL"
>    OPENAI_BASE_URL=https://127.0.0.1:11434/v1/
>    CLAUDE_API_KEY="NULL"
>    GOOGLE_API_KEY="NULL"
>
> It should work now. The last change that made it work for me was making sure OPENAI_BASE_URL was https, not http, even though it's localhost...
>
> Make sure to unset your local environment variables in case you exported them in one of the earlier steps, so that fabric uses the .env file, e.g. unset OPENAI_BASE_URL.

I added DEFAULT_MODEL="mistral", as I saw in another ticket that it might help... but no.

I added the port to the firewall.

If I hit the URL (without the "s" ^^) in Firefox I get "Ollama is running", so I do not understand ^^

Thanks in advance to all!

Im0 commented 2 months ago

I'm running the Ollama Docker container, version 0.1.48, and getting the same error: Error: Client error '404 Not Found' for url 'http://127.0.0.1:11434/v1/models'

I'm a little confused about how others have this working with ollama 0.1.48 and below, as the models endpoint only appears to have been committed 6 days ago (here: https://github.com/ollama/ollama/commit/996bb1b85e0c1b3ae64246a50ea412dc2a2e30d8) and maybe only lands in the 0.1.49 RCs. How did people have this working before now? I must be missing something.
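
One way to see which side of that commit a given install is on (assuming the container publishes the default port) is to hit the endpoint directly:

curl -i http://127.0.0.1:11434/v1/models    # 404 means this build has no OpenAI-compatible models route; a JSON list means it does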

zanagraf commented 2 months ago

I got the same error, and I did that, but not in a container.

I started by opening my firewall port.

Then I filled in the .env file (see my previous post), then I finally emptied it... and re-exported the environment variables. (As I use Mistral, my OPENAI_API_KEY is NULL.) And I set OPENAI_BASE_URL=http://127.0.0.1:11434/v1/ as mentioned in the fabric instructions (I did that first, but I tried many things and then came back to it...).
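
For reference, the "re-export the environment variables" step looks roughly like this (same values as the .env file, just set in the shell):

export OPENAI_API_KEY="NULL"
export OPENAI_BASE_URL=http://127.0.0.1:11434/v1/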

DmacMcgreg commented 2 months ago

Follow my instructions above carefully; you're missing the 's' in https. For me and others, that's what got it working.

zanagraf commented 2 months ago

Thanks for the reply, but I actually haven't set up SSL or a certificate (it's purely for internal home use), so https doesn't work. And yet I now have a fully functional fabric + Ollama + Mistral setup!

Im0 commented 2 months ago

> Follow my instructions above carefully; you're missing the 's' in https. For me and others, that's what got it working.

Thanks for your reply, mate. Like @zanagraf, I don't have HTTPS enabled in front of it. Trying HTTPS gives ERR_SSL_PROTOCOL_ERROR. Thanks though.

mario-sotil commented 2 months ago

FWIW, I used @createchange's comment as a reference (https://github.com/danielmiessler/fabric/issues/373#issuecomment-2087998667) and found that setting DEFAULT_MODEL was enough to make it work with Ollama.

$ fabric --listmodels
GPT Models:

Local Models:
llama3:latest

Claude Models:

Google Models:

$ export DEFAULT_MODEL="llama3:latest"
$ echo "There was a little house on the praire" | fabric -p create_summary --stream
# OUTPUT SECTIONS

## ONE SENTENCE SUMMARY:
Laura Ingalls Wilder's Little House series is a classic tale of pioneering life on the American frontier, exploring themes of resilience and growth.

## MAIN POINTS:

1. The Little House series is based on Laura Ingalls Wilder's childhood experiences growing up in a pioneer family.
2. The stories follow the Ingalls family as they travel across the American frontier, facing challenges and adventures along the way.
[...]