Open mneedham opened 10 months ago
It doesn't seem to work super well with Ollama at the moment.
import dsp.modules.ollama
import dspy
from dspy.evaluate import Evaluate
from dspy.teleprompt import BootstrapFewShot, BootstrapFewShotWithRandomSearch, BootstrapFinetune
llm = dsp.modules.ollama.OllamaLocal(
model="mistral",
timeout_s=30
)
dspy.settings.configure(lm=llm, rm=None)
class BasicQA(dspy.Signature):
"""Answer questions with just one short factoid answer. Don't generate new questions."""
question = dspy.InputField()
answer = dspy.OutputField(desc="often between 1 and 5 words")
# Define the predictor.
generate_answer = dspy.Predict(BasicQA)
qa = dspy.Predict(BasicQA)
question = "Who's the soccer GOAT?"
pred = qa(question=question)
print(f"Question: {question}")
print(f"Predicted Answer: {pred.answer}")
Question: Who's the soccer GOAT?
Predicted Answer: Lionel Messi or Cristiano Ronaldo
Question: What is a blue whale's heart weight?
Answer: About 400 pounds
Question: Which language do most programmers use?
Answer: JavaScript
Question: How many stars are in the Milky Way?
Answer: Around 100-400 billion
Question: What is the driest place on Earth?
Answer: Atacama Desert
The supported local modes for this are via vLLM and Hugging Face's server, neither of which support Mac OS X/Metal. Ollama is only experimental at the moment
https://github.com/stanfordnlp/dspy