stanfordnlp / dspy

DSPy: The framework for programming—not prompting—foundation models
https://dspy-docs.vercel.app/
MIT License
17.44k stars 1.33k forks

Documentation oddity: Signature Optimizer -- trainset vs. devset #940

Closed rpgoldman closed 1 week ago

rpgoldman commented 5 months ago

In the SignatureOptimizer doc page, the example begins by splitting the dataset into a trainset and a devset.

This is odd because the trainset is never used afterwards, and later the teleprompter is shown passing the devset as the value of the trainset= keyword argument!

compiled_prompt_opt = teleprompter.compile(cot, trainset=devset, eval_kwargs=kwargs)

This is odd enough that it really should have some explanation.

(If someone has an answer, but not the time to put it into the documentation, I could do that myself; I just don't know why this is done this way.)
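For context, here is a minimal sketch of the split pattern the issue describes (the data, variable names, and split ratio below are placeholders, not the doc page's actual example):

```python
# Sketch of the pattern under discussion: the data is split into trainset
# and devset, but only devset is ever used -- and it is later passed to
# compile() as the trainset= argument. Placeholder data for illustration.
dataset = [f"example-{i}" for i in range(20)]

split = len(dataset) // 2
trainset = dataset[:split]  # created but, per the issue, never used
devset = dataset[split:]    # later passed as the trainset= keyword argument

# The call the issue quotes would then look like:
#   compiled_prompt_opt = teleprompter.compile(cot, trainset=devset, eval_kwargs=kwargs)
print(len(trainset), len(devset))
```

Whether the doc intends the optimizer to train directly on the held-out split, or the initial split is simply leftover boilerplate, is exactly the question this issue raises.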

superkaiy commented 4 months ago

@okhat I used the local model llama3-70b-instruct. I wonder whether anyone has been able to optimize the instructions or signatures using COPRO by just running the example code from the doc page. I cannot see any difference between the results of the baseline and the optimized (COPRO) program.