run-llama / llama-agents

MIT License
947 stars 69 forks

Custom Prompt for Orchestrator #80

Open ryann-sportsbet opened 6 days ago

ryann-sportsbet commented 6 days ago

Currently, we only have two options for the Orchestrator: either QueryPipeline or AgentOrchestrator, and neither exposes custom prompt parameters.

Can we have something similar to the prompt in FunctionCallingAgentWorker?

The reason I'm asking is that I have built a multi-agent system with FunctionCallingAgentWorker and it gets better responses (both more accurate and more meaningful) compared to llama-agents.

logan-markewich commented 6 days ago

You could likely migrate your existing multi-agent system pretty easily by using the pipeline orchestrator, and then you'd have a distributed agent system 🙏

There are prompts for the agent orchestrator here. There are two main ones, plus the description of the "human", which is used for the final response: https://github.com/run-llama/llama-agents/blob/50a617b1b6545a662160916c757b3598a9ecc61b/llama_agents/orchestrators/agent.py#L31
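As a rough sketch of what overriding those prompts might look like, one could define plain template strings and pass them to `AgentOrchestrator`. The wording and the `{task}`/`{history}` placeholder names below are illustrative assumptions, not the library's actual template variables; check the defaults in `llama_agents/orchestrators/agent.py` for the real ones.

```python
# Hypothetical custom prompt templates for AgentOrchestrator.
# Placeholder names ({task}, {history}) are assumptions for illustration;
# the real template variables live in llama_agents/orchestrators/agent.py.

CUSTOM_SUMMARIZE_TMPL = (
    "You are coordinating several specialist agents.\n"
    "Given the conversation so far, write a concise final answer for the user.\n"
    "Conversation:\n{history}\n"
)

CUSTOM_FOLLOWUP_TMPL = (
    "Given the user's task and the results so far, decide which agent (tool) "
    "to call next, or reply directly if the task is complete.\n"
    "Task: {task}\n"
    "Results so far:\n{history}\n"
)


def render(template: str, **kwargs: str) -> str:
    """Fill a template the same way str.format-based prompt templates do."""
    return template.format(**kwargs)
```

These strings would then be supplied via the orchestrator's prompt parameters (e.g. `summarize_prompt=` and `followup_prompt=`), as in the snippet further down this thread.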

ryann-sportsbet commented 5 days ago

@logan-markewich

Thank you sir.

I've implemented your suggestion by modifying both the summarize and follow-up prompts, with some adjustments to the human description. However, the results remain unchanged. In some instances, it appears that no agent is being called from the available agents, and the response seems to be based solely on the LLM's own knowledge.

```python
pipeline_orchestrator = PipelineOrchestrator(pipeline)
control_plan = ControlPlaneServer(
    message_queue=message_queue,
    orchestrator=AgentOrchestrator(
        llm=llm,
        followup_prompt=test_followup_prompt(),
        summarize_prompt=test_multi_agent_summary_prompt(),
        human_description=test_human_description_prompt(),
    ),
)
```

logan-markewich commented 4 days ago

You might need to share more complete code to debug this properly. I can't do much with the above.