stanfordnlp / dspy

DSPy: The framework for programming—not prompting—language models
https://dspy.ai
MIT License

Using Different LMs in Compilation #1816

Closed: kxzxvbk closed this issue 2 hours ago

kxzxvbk commented 2 hours ago

Hi, I'm using DSPy to better prompt Llama 3.1. However, I find that Llama 3.1 itself does not perform well at proposing new instructions, as this task is quite challenging. I'm wondering whether there is an elegant way to use different LLMs during compilation (e.g., use GPT to propose new prompts). Thanks :)

okhat commented 2 hours ago

Hey @kxzxvbk! Yes, definitely. Are you using MIPROv2?

optimization_model_kwargs = dict(prompt_model=gpt4o, task_model=llama, teacher_settings=dict(lm=gpt4o))
tp = dspy.MIPROv2(metric=YOUR_METRIC, auto="medium", num_threads=8, **optimization_model_kwargs)
optimized_react = tp.compile(YOUR_PROGRAM, trainset=trainset)
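For context, a fuller sketch of the setup above might look like the following. The model identifiers, the Ollama endpoint, and the toy program, metric, and trainset are illustrative assumptions (not part of the answer above); substitute whatever LMs, program, and metric you actually use.

```python
import dspy

# Illustrative LM setup (assumptions): a strong proposer model (GPT-4o) and
# the target model (Llama 3.1 served locally via Ollama).
gpt4o = dspy.LM("openai/gpt-4o")
llama = dspy.LM("ollama_chat/llama3.1", api_base="http://localhost:11434", api_key="")

# The program itself runs on the target model.
dspy.configure(lm=llama)

# Toy program, metric, and trainset, purely as placeholders.
program = dspy.ChainOfThought("question -> answer")

def exact_match(example, pred, trace=None):
    return example.answer.strip().lower() == pred.answer.strip().lower()

trainset = [
    dspy.Example(question="What is the capital of France?", answer="Paris").with_inputs("question"),
    # ... in practice MIPROv2 needs a reasonably sized trainset ...
]

# The key part from the answer above: route instruction proposal and
# demonstration bootstrapping to the stronger model, while the task is
# still run and evaluated with the target model.
optimization_model_kwargs = dict(
    prompt_model=gpt4o,               # proposes candidate instructions
    task_model=llama,                 # runs the program during evaluation
    teacher_settings=dict(lm=gpt4o),  # bootstraps demos with the stronger LM
)
tp = dspy.MIPROv2(metric=exact_match, auto="medium", num_threads=8, **optimization_model_kwargs)
optimized_program = tp.compile(program, trainset=trainset)
```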
kxzxvbk commented 2 hours ago

@okhat That's great, thank you! Maybe this could be added to the relevant docs.