To use it with Ollama, install the ollama Python library instead of openai. See https://github.com/ollama/ollama-python
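A quick way to confirm the library and a local Ollama server are reachable before editing the server code (a minimal sketch, assuming a default local install, e.g. after `pip install ollama`):

```python
import ollama

# List the models available on the local Ollama instance;
# this will raise a connection error if the server is not running.
print(ollama.list())
```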
Then comment out or remove lines 29 through 34 in the file https://github.com/danielmiessler/fabric/blob/main/infrastructure/server/fabric_api_server.py. These are:
```python
# Load your OpenAI API key from a file
with open("openai.key", "r") as key_file:
    openai.api_key = key_file.read().strip()

## Define our own client
client = openai.OpenAI(api_key=openai.api_key)
```
Also exchange lines 124 through 132, which are:
```python
response = openai.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=messages,
    temperature=0.0,
    top_p=1,
    frequency_penalty=0.1,
    presence_penalty=0.1,
)
assistant_message = response.choices[0].message.content
```

with

```python
response = ollama.chat(model='mistral', messages=messages)
assistant_message = response['message']['content'].strip()
```
Download and use the Ollama model of your choice instead of mistral...
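For reference, here is a minimal, self-contained sketch of the modified call (assuming `import ollama` has been added at the top of the file and the chosen model has already been pulled, e.g. with `ollama pull mistral`; the example messages are illustrative, not from the project):

```python
import ollama

# Chat history in the same list-of-dicts format the server already builds
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of running LLMs locally."},
]

# Replace 'mistral' with whichever local model you have pulled
response = ollama.chat(model="mistral", messages=messages)
assistant_message = response["message"]["content"].strip()
print(assistant_message)
```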
Some kind of configuration for which model should be chosen for which specific task should be considered, perhaps by introducing an additional MD file such as "Model.md". With Ollama it would then also be possible to give the respective model further instructions before it solves the task. A rough sketch of such a mapping follows below.
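Purely as an illustration (nothing like this exists in the project yet), a per-task model mapping could be a small config file read when a request comes in; the file name, format, and helper function below are hypothetical:

```python
import json
import ollama

# Hypothetical config mapping pattern/task names to local models,
# e.g. {"summarize": "mistral", "extract_wisdom": "llama2"}
with open("model_config.json", "r") as f:
    MODEL_MAP = json.load(f)

def run_pattern(pattern_name, messages, default_model="mistral"):
    # Pick the model configured for this task, falling back to a default
    model = MODEL_MAP.get(pattern_name, default_model)
    response = ollama.chat(model=model, messages=messages)
    return response["message"]["content"].strip()
```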
Yes, Chris is totally correct here, and we're looking to make this more built-in and easy in the project. For now you can use techniques like the ones Chris mentioned above.
Is the Mill only meant to interact with OpenAI, or is it possible to set up a Mill with a local LLM via something like LMStudio/Ollama/Langchain?
The infrastructure code seems to indicate only OpenAI connections. Maybe this is just meant as a starting example...