AndreCNF opened this issue 1 month ago
Hey @AndreCNF, I was able to reproduce your issue. I am putting in a PR to fix it now :)
@reecelikesramen
It seems your PR has not been merged yet.
I am referring to the example "https://github.com/MadcowD/ell/blob/main/examples/providers/ollama_ex.py".
Here is my test code:
"""
Ollama example.
"""
import ell
import ell.models
import ell.models.ollama
ell.init(store='./logdir', autocommit=True, verbose=True)
ell.models.ollama.register(base_url="http://localhost:11434/api")
@ell.simple(model="llama3.2:latest", temperature=0.1)
def write_a_story():
return "write me a story"
write_a_story
But after running it, I don't get any output. Any help for me?
Regarding base_url="http://localhost:11434/api", I found it doesn't work with the URL below either, please clarify this as well.
For your reference, this is the Ollama API call to generate a completion:
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt": "Why is the sky blue?"
}'
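For comparison, here is the same native call from Python (a minimal sketch; the "llama3.2" model name and the "stream": False flag are assumptions, and it talks to Ollama's native API directly rather than going through ell):

import requests

# Hit Ollama's native completion endpoint (not the OpenAI-compatible /v1 layer).
# "stream": False asks for a single JSON object instead of a stream of chunks.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Why is the sky blue?",
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["response"])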
Also, this code looks weird:
https://github.com/MadcowD/ell/blob/main/src/ell/models/ollama.py#L30
response = requests.get(f"{base_url}/../api/tags")
@ozbillwang Here are the docs for the v1-style link we're using. It's the endpoint for Ollama's OpenAI compatibility layer.
The standard base URL for an OpenAI-compatible API includes the /v1/ route, so I will concede that response = requests.get(f"{base_url}/../api/tags")
looks a bit weird, but it accomplishes what is needed.
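To make that concrete, here is a small sketch (assuming base_url is the OpenAI-compatible endpoint http://localhost:11434/v1, as in the linked ollama_ex.py example) showing that, once the dot segments are resolved, the request lands on the same native /api/tags endpoint:

from urllib.parse import urljoin

# Assumed base URL of Ollama's OpenAI compatibility layer.
base_url = "http://localhost:11434/v1"

# The literal URL built by the f-string in ollama.py:
raw = f"{base_url}/../api/tags"
print(raw)  # http://localhost:11434/v1/../api/tags

# Standard RFC 3986 dot-segment resolution collapses it to the native endpoint:
resolved = urljoin(base_url + "/", "../api/tags")
print(resolved)  # http://localhost:11434/api/tags

Whether the client or the server ends up collapsing the /../, the effective target is the same /api/tags endpoint the curl example above points at, which also suggests the register call should point at the /v1 endpoint rather than /api.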
I can't get any version of my LMPs to have a commit message when trying to use Ollama for this. Note that I'm also using Ollama to run the LMP, so could it be that the local Ollama server is blocked while generating the answer to the LMP?
ell version:
0.0.14
ell settings:
How it looks on ell studio: