stanfordnlp / dspy

DSPy: The framework for programming—not prompting—foundation models
https://dspy-docs.vercel.app/
MIT License

Preserving Tracebacks #1665

Open dgonier opened 4 hours ago

dgonier commented 4 hours ago

There are a few places where getting to the root error is difficult because of the retry attempts and the way errors are preserved. I noticed, for example, that it was easier to remove the try/except blocks to debug the code than to keep them there. I'm wondering if this could be worked on a bit in the next version. For example, maybe an inspect_errors() method that shows the exact errors and full tracebacks?
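To make the idea concrete, here is the kind of thing I have in mind (none of these names exist in DSPy today; it's just a sketch):

```python
import traceback

class ErrorLog:
    """Hypothetical helper: keep exceptions raised inside retry loops instead of swallowing them."""

    def __init__(self):
        self.records = []

    def record(self, exc: Exception):
        # Store the fully formatted traceback so nothing is lost across retries.
        self.records.append(
            "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
        )

    def inspect_errors(self):
        # Dump every captured error with its complete traceback.
        for i, tb in enumerate(self.records, 1):
            print(f"--- error {i} ---\n{tb}")
```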

Thanks! -Devin

okhat commented 3 hours ago

Hey @dgonier! I agree with this sentiment. Have you tried dspy.Evaluate's traceback parameter? I forget the exact name. Is it missing anything you need?
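Something along these lines, from memory (double-check the parameter name):

```python
import dspy

# Stand-ins for your own data, metric, and program; assumes an LM is already configured.
devset = [dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question")]

def my_metric(example, pred, trace=None):
    return example.answer in pred.answer

program = dspy.Predict("question -> answer")

# If I recall correctly the flag is provide_traceback: it prints the full traceback
# whenever an example raises, instead of silently counting it as a failure.
evaluate = dspy.Evaluate(devset=devset, metric=my_metric, provide_traceback=True)
evaluate(program)
```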

dgonier commented 2 hours ago

Hi,

I have used it, based on the docs, for evaluating compiled vs. non-compiled programs, but I didn't know it could be used for tracebacks during optimization. I ran into a bunch of issues trying to do BootstrapFewShot with lists of Pydantic types, which I know was part of the latest commit. After digging around, I eventually discovered a bug in my own code that became immediately apparent once I removed the try/except logic.
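For reference, the shape of what I was doing was roughly this (heavily simplified; the names are made up and an LM is assumed to be configured):

```python
import dspy
from pydantic import BaseModel

class Step(BaseModel):
    description: str
    rationale: str

class PlanSteps(dspy.Signature):
    """Break a task into ordered steps."""
    task: str = dspy.InputField()
    steps: list[Step] = dspy.OutputField()

program = dspy.Predict(PlanSteps)

def metric(example, pred, trace=None):
    # Toy metric for illustration only.
    return len(pred.steps) > 0

trainset = [dspy.Example(task="Ship a small bugfix").with_inputs("task")]

optimizer = dspy.BootstrapFewShot(metric=metric)
compiled = optimizer.compile(program, trainset=trainset)
```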

In this particular case the problem appeared to be with output parsing, but it was actually an issue with my inputs. Is there a different way I should be using Evaluate that I'm not? If so, can you point me to a particular doc?

I love DSPy and have become a strong advocate of it. Also, it sounds like you know Zach B., who is working on the adversarial tree of thoughts; I am working with him on that as well. There are a few other interesting concepts related to planning that would be fun to discuss once they are more developed.

I would also suggest some more docs on the new LiteLLM setup for custom models. I had to write my own implementation and register it, because our setup is pretty customized for a local Kubernetes deployment rather than an API like OpenAI's, and I found the LiteLLM docs less than helpful there. Happy to share our setup if it helps.
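For context, the minimal version of pointing DSPy at a local OpenAI-compatible endpoint looks roughly like this (endpoint and model names are placeholders); our real setup needed a custom LM class on top of that:

```python
import dspy

# Placeholder values: the real service is an OpenAI-compatible server inside the
# cluster, not api.openai.com.
lm = dspy.LM(
    "openai/my-internal-model",  # the openai/ prefix routes LiteLLM to the OpenAI-compatible client
    api_base="http://llm-service.default.svc.cluster.local:8000/v1",
    api_key="unused-internally",
)
dspy.configure(lm=lm)
```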

Best, Devin