stanfordnlp / dspy

DSPy: The framework for programming—not prompting—foundation models
https://dspy-docs.vercel.app/
MIT License

A better history for more functionalities #1563

Open Anindyadeep opened 6 hours ago

Anindyadeep commented 6 hours ago

While I was working on issue #1073, I realized that history is currently very simple: a plain list attached to LMs at instantiation time. However, it could be much more than that. We could utilize history in a lot of different ways if it were a separate module. For example, some of the use cases would be:

  1. inspect the chat
  2. filter on specific criteria (e.g., by datetime)
  3. build conversation prompts or utilize the history for something else
  4. serialize it for making API calls
  5. cache the history and possibly re-use it, either to continue a conversation or to attach it to a different LM

Here is a usage-based example:

lm = LM()

# after some conversation
lm.history.inspect()

# to filter
lm.history.filter(by="some_datetime")

# serialize
lm.history.to_json()

# cache and re-use
lm.history.save("some_history.json")

new_lm = LM()
new_lm.history = LM.load_history("some_history.json")

# build prompt
lm(lm.history.build_prompt(cutoff=3, query="some contextual query"))
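To make this concrete, here is a rough sketch of what such a module could look like. Everything below (names, signatures, and the assumed entry format) is hypothetical and only meant to illustrate the shape of the API, not a final design:

import json

class History:
    """A thin wrapper around the list of interaction records an LM already keeps."""

    def __init__(self, entries=None):
        # each entry is assumed to be a dict, e.g. {"timestamp": ..., "prompt": ..., "response": ...}
        self.entries = list(entries or [])

    def append(self, entry):
        self.entries.append(entry)

    def inspect(self, n=5):
        # print the last n interactions for quick debugging
        for entry in self.entries[-n:]:
            print(entry.get("timestamp"), entry.get("prompt"), "->", entry.get("response"))

    def filter(self, by):
        # keep only entries newer than the given timestamp (one possible filter)
        return History(e for e in self.entries if e.get("timestamp", "") >= by)

    def to_json(self):
        return json.dumps(self.entries, default=str)

    def save(self, path):
        with open(path, "w") as f:
            f.write(self.to_json())

    @classmethod
    def load(cls, path):
        with open(path) as f:
            return cls(json.load(f))

    def build_prompt(self, cutoff=3, query=""):
        # stitch the last `cutoff` turns into a context block followed by the new query
        context = "\n".join(
            f"User: {e.get('prompt')}\nAssistant: {e.get('response')}"
            for e in self.entries[-cutoff:]
        )
        return f"{context}\nUser: {query}"

Since the entries would still live in a plain list underneath, anything that already treats lm.history as a list should keep working.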

Let me know what you think. If things go well, I will open a PR.

okhat commented 3 hours ago

Thanks a lot @Anindyadeep! This is a very interesting vision. I like that it's a list, because people can load that list and do anything they want with it, instead of learning a new sub-library for filtering/saving/querying the history. What do you think?

It's our job to save everything they may need to that list, but they can figure out how to query it later.
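For example, assuming each entry in that list is a plain dict with something like timestamp, prompt, and response fields (the exact keys may differ across versions), standard Python already covers most of the use cases above:

import json

history = lm.history  # just a list of dicts

# filter by datetime (assuming an ISO-formatted "timestamp" field exists)
recent = [e for e in history if e.get("timestamp", "") >= "2024-10-01"]

# serialize / cache
with open("some_history.json", "w") as f:
    json.dump(history, f, default=str)

# re-use with another LM
with open("some_history.json") as f:
    new_lm.history = json.load(f)

# build a prompt from the last 3 turns
prompt = "\n".join(str(e.get("prompt")) for e in history[-3:])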