Hi, thanks for submitting this issue. At the moment we only support the Ollama endpoint http://localhost:11434/v1, so if you have Ollama running anywhere else, that would cause this error.
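In the meantime, one way to check whether Ollama is actually listening on the supported endpoint is to query its OpenAI-compatible API directly. This is a minimal sketch, assuming a stock Ollama install and the official `openai` Python client:

```python
from openai import OpenAI

# Ollama's OpenAI-compatible API lives under /v1; the client requires an
# API key, but Ollama ignores it, so any placeholder value works.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# If this call fails, Ollama is probably serving on a different host/port
# than the hard-coded http://localhost:11434/v1 the extension expects.
for model in client.models.list():
    print(model.id)
```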
Would it be possible to track this as an issue and open a merge request that adds a custom field for the local endpoint?
@jovicon we've been migrating these runbook prompts so that you can specify the URL in the prompt itself. That way you'll be able to point at whichever OpenAI-compatible endpoint you like when you run the prompt. It's a slightly different flow from what we used previously, but it is more open to edits. I'll keep this issue open until we have a doc describing how you can try this.
Closing this one! Please feel free to check out the main repo with the new docs at https://github.com/docker/labs-ai-tools-for-devs. You can target a local llama endpoint with the following front-matter:
---
url: http://llama-endpoint.local/
model: llama3.2
---
# prompt system
Prompt content
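For context, the `url` and `model` values are just what an OpenAI-compatible client needs for the request. The sketch below only illustrates that mapping; it is not the prompt runner's implementation, and it assumes the hypothetical http://llama-endpoint.local/ address from the example plus the `openai` Python client:

```python
from openai import OpenAI

# Assumptions: the example's http://llama-endpoint.local/ address, and that
# the server exposes an OpenAI-compatible API (some servers expect "/v1"
# appended to the base URL).
client = OpenAI(base_url="http://llama-endpoint.local/", api_key="not-needed")

# The front-matter's `model` is passed through as the model name; the
# "# prompt system" section of the markdown supplies the system message.
response = client.chat.completions.create(
    model="llama3.2",
    messages=[
        {"role": "system", "content": "Prompt content"},
        {"role": "user", "content": "Hello"},
    ],
)
print(response.choices[0].message.content)
```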
Hi, I tried runbook v0.0.10 and ran into this problem:
This is my Dockerfile:
and this is my extension config: