Open · evangriffiths opened this issue 1 month ago
@evangriffiths Can you please elaborate on what you mean by `transient_history`? From what I see from the link, it's simply an additional (untyped) array that gets passed on like this:

```python
reply = self.llm(self.history + transient_history + temp_messages, stop=self.stop_list)
```

So I'm not sure how we can pass in another LLM call in between.
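For concreteness, here is a minimal sketch of what that concatenation does. The `llm` stub and the message-dict format are assumptions for illustration, not microchain's actual types:

```python
# Sketch of how transient_history is spliced into the prompt.
# The llm() stub and message dicts are illustrative assumptions.

def llm(messages, stop=None):
    # Stub: a real implementation would call a chat model here.
    return f"(reply based on {len(messages)} messages)"

history = [
    {"role": "system", "content": "You're an agent that bets on markets..."},
    {"role": "user", "content": "GetMarkets()"},
]
transient_history = [
    {"role": "user", "content": "Analyse how I'm doing so far."},
    {"role": "assistant", "content": "You haven't sold tokens in any markets yet."},
]
temp_messages = []

# Mirrors the quoted line: history + transient + temp messages.
reply = llm(history + transient_history + temp_messages, stop=None)
print(reply)
```

The key point is that the transient messages influence this one call but are never appended to `self.history`.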
The idea here is that we use an LLM to generate the transient history. Something like:

```python
>>> print(agent.history)
system: You're an agent that bets on markets...
user: GetMarkets()
assistant: the markets are ...
...

# after e.g. 10 steps we interject with this bit
>>> transient_history = generate_some_useful_analysis_about_history_so_far(agent.history)
>>> print(transient_history)
user: my goal is to be a good trader agent. Analyse how I'm doing so far, and suggest improvements. Think this through step by step.
assistant: you haven't sold tokens in any markets yet, whyyy!!!

# continue with this helpful advice
>>> agent.run(resume=True, transient_history=transient_history)
```
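A hypothetical implementation of `generate_some_useful_analysis_about_history_so_far` could look like this. The `analyst_llm` stub and the message-dict format are assumptions for illustration, not microchain's actual types:

```python
# Hypothetical helper that asks a second LLM to critique the run so far.
# analyst_llm() is a stub; in practice it would call a real chat model.

def analyst_llm(messages):
    # Stub standing in for a real LLM call.
    return "you haven't sold tokens in any markets yet, whyyy!!!"

def generate_some_useful_analysis_about_history_so_far(history):
    prompt = {
        "role": "user",
        "content": (
            "my goal is to be a good trader agent. Analyse how I'm doing "
            "so far, and suggest improvements. Think this through step by step."
        ),
    }
    advice = analyst_llm(history + [prompt])
    # Return a user/assistant pair suitable for injection as transient_history.
    return [prompt, {"role": "assistant", "content": advice}]
```

Returning a user/assistant pair keeps the injected advice in the same shape as the rest of the conversation, so the main agent's LLM sees it as just another exchange.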
Because of the design of microchain (every LLM response is just a function call), it looks like we could implement this with the existing microchain library, using the `transient_history` arg to `Agent.run` (commit https://github.com/galatolofederico/microchain/commit/31018952912352c23d7b4576db4e8663345e19b4). E.g. we could have a separate LLM that we call every N iterations, that comments on the microchain agent's run so far, gives advice on how well it's doing, and this is injected as the `transient_history`.