brainlid opened 8 months ago
I've spent some time building a graph-style agent framework with an executor on top of this langchain library. Let me know if it's something you'd like a demo of, and we could possibly discuss how to integrate it.
I'm definitely interested in what you've built.
What do you mean by a graph-style agent? I've researched the way the TS LangChain library does agents, but I wouldn't describe it as graph-related.
So, what I've been working on is more akin to the LangGraph libraries.
For example, in my work, a simple agent that might do some self-reflection to write an essay is defined like this:
```elixir
%GraphAgent{
  name: "Essay Writer",
  final_output_property: :latest_revision,
  state: %EssayWriterState{}
}
|> GraphAgent.add_node(:first_draft, write_first_draft_node)
|> GraphAgent.add_node(:write, write_node)
|> GraphAgent.add_node(:provide_feedback, feedback_node)
|> GraphAgent.set_entry_point(:first_draft)
|> GraphAgent.add_edge(:first_draft, :provide_feedback)
|> GraphAgent.add_edge(:provide_feedback, :write)
|> GraphAgent.add_conditional_edges(:write, [:end, :provide_feedback], should_continue)
```
Currently, each node is a function that takes the chain as one argument along with the input state. Each node returns the new output state.
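Roughly, the node functions and the conditional check in the pipeline above look something like the sketch below (simplified; the state fields and the hard-coded draft text are placeholders, not the real implementation):

```elixir
# Simplified sketch: a node receives the chain and the current state, does its
# work, and returns the updated state. The state fields here are placeholders.
write_first_draft_node = fn _chain, %EssayWriterState{} = state ->
  # In the real node, the chain is run here to produce the draft text.
  %EssayWriterState{state | latest_revision: "draft about " <> state.topic, revision_count: 1}
end

# A conditional edge is just a function of the state that picks the next node.
should_continue = fn %EssayWriterState{} = state ->
  if state.revision_count >= 3, do: :end, else: :provide_feedback
end
```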
This style of agent implementation feels really nice and natural in Elixir!
I have started porting Microsoft's Autogen to Elixir here. The goal will be to stay close to Autogen's conversation patterns, such as resumable GroupChats and Nested chats.
Here's a sample conversation between two agents:
```elixir
joe = %XAgent{
  name: "Joe",
  system_message: "Your name is Joe and you are a part of a duo of comedians.",
  type: :conversable_agent,
  llm_config: %{temperature: 0.9},
  human_input_mode: "NEVER",
  max_consecutive_auto_reply: 1,
  is_termination_msg: fn msg -> String.contains?(String.downcase(msg.content), "bye") end
}

cathy = %XAgent{
  name: "Cathy",
  system_message: "Your name is Cathy and you are a part of a duo of comedians.",
  type: :conversable_agent,
  llm_config: %{temperature: 0.7},
  human_input_mode: "NEVER"
}

XAgent.initiate_chat(
  from_agent: joe,
  to_agent: cathy,
  message: "Cathy, tell me a joke and then say the words GOOD BYE..",
  max_turns: 2
)
```
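For the resumable GroupChat pattern mentioned above, I'm imagining an API along these lines (purely hypothetical; none of these GroupChat functions exist yet):

```elixir
# Hypothetical sketch of the planned GroupChat API; nothing here is implemented yet.
group = XAgent.new_group_chat(agents: [joe, cathy], max_rounds: 6)

{:paused, chat_state} = XAgent.run_group_chat(group, message: "Work up five minutes of new material.")

# Because the chat state is plain data, it could be persisted and resumed later.
XAgent.resume_group_chat(chat_state)
```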
I think an agent should behave similarly to a GenServer or LiveView: you would initialize it with some state variables and a messages list, and then implement handle_*-style callbacks for the chain-completed event. Each callback would receive the response and the previous state and set up the state for the next chain. Something like:
```elixir
def handle_chain_success(:chain_x, %{response: response}, state) do
  state = assign(state, :chain_x_output, response.content)
  template = PromptTemplate.from_template!("the response was <%= @chain_x_output %>")
  message = Message.new_user!(PromptTemplate.format(template, state))
  state = assign(state, :messages, state.messages ++ [message])
  {:reply, state}
end
```
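A bare-bones GenServer skeleton of what I mean might look like the sketch below (the chain execution itself is stubbed out, and assign/3 is just a local helper over a state map, not the LiveView function):

```elixir
defmodule AgentServer do
  # Minimal sketch only: the chain execution is stubbed out; this just shows
  # how chain results could be dispatched to handle_chain_success callbacks.
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(_opts), do: {:ok, %{messages: [], chain_x_output: nil}}

  @impl true
  def handle_call({:chain_completed, chain_id, response}, _from, state) do
    {:reply, new_state} = handle_chain_success(chain_id, %{response: response}, state)
    {:reply, :ok, new_state}
  end

  defp handle_chain_success(:chain_x, %{response: response}, state) do
    state = assign(state, :chain_x_output, response.content)
    {:reply, state}
  end

  # LiveView-style assign over a plain state map.
  defp assign(state, key, value), do: Map.put(state, key, value)
end
```

From there the framework itself, rather than user code, would be responsible for running the next chain and calling back in with the result.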
I have now started using langchain in my multi-agent library, autogen. One limitation I ran into was the inability to keep track of the sender and recipient agents for each message. I am currently using the name field to track the sender agent, but there's no place for the recipient. It would be great if LangChain.Message provided a placeholder map for metadata like this; it would be helpful for any agent framework that wants to build on this library.
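To make that concrete, here is roughly what I do today and what I'd like to be able to express (the metadata field is hypothetical; it does not exist on LangChain.Message):

```elixir
alias LangChain.Message

# Current workaround: only the sender fits, via the existing :name field.
msg = Message.new_user!("Tell me a joke")
msg = %Message{msg | name: "Joe"}

# Hypothetical: a free-form metadata map on the struct would cover both sides
# of the exchange, e.g.
#   %Message{metadata: %{sender: "Joe", recipient: "Cathy"}}
```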
@brainlid What's the expected timeline for the 0.3.0 release? The last RC was back in June. If the APIs are reasonably stable, perhaps you could cut a second RC release now for people to build agents on top of?
Yes, I need to cut a new RC release. There's one more breaking change I want to get in before v0.3. At the moment I'm very focused on launching a new business/service/marketing effort, so I've gotten a bit behind here.
Agents can act autonomously and perform more complex, multi-step activities.