run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

Simple agent without any api key #16294

Open atodhruv opened 1 month ago

atodhruv commented 1 month ago

Question Validation

Question

How can I create a simple agent for chat without using any API key? I need to use multiple agents.

dosubot[bot] commented 1 month ago

Hey @atodhruv! I'm here to help you with any bugs, questions, or contributions while you wait for a human maintainer. Let's tackle this together!

To create a simple chat agent without using any API key and to use multiple agents, you can use the create_llama_agent and create_llama_chat_agent functions provided in the llama-index library. Here is an example of how you can do this:

from llama_index.core.langchain_helpers.agents.toolkits import LlamaToolkit
from llama_index.core.langchain_helpers.agents.agents import create_llama_agent, create_llama_chat_agent

# `toolkit` holds the tools the agents may call; in practice you would
# pass a list of IndexToolConfig objects rather than leaving it empty.
# `llm` must be a concrete LangChain LLM instance -- BaseLLM itself is
# abstract and cannot be instantiated directly. A locally hosted model
# here is what keeps the setup free of API keys.
toolkit = LlamaToolkit(index_configs=[])
llm = ...  # a concrete local LLM; see the sketch below

# Create a simple agent
simple_agent = create_llama_agent(toolkit, llm)

# Create a chat agent
chat_agent = create_llama_chat_agent(toolkit, llm)

# Both factories return LangChain AgentExecutors, so tasks are invoked
# with `run` rather than `execute`
result_simple = simple_agent.run("Your task here")
result_chat = chat_agent.run("Your chat task here")

print(result_simple)
print(result_chat)

This code demonstrates how to create and use multiple agents (simple_agent and chat_agent). Note that avoiding an API key depends entirely on the LLM you pass in: with a locally hosted model, no key is required. The create_llama_agent function allows you to specify different agent types, while create_llama_chat_agent is specifically for creating chat agents and defaults to the CONVERSATIONAL_REACT_DESCRIPTION agent type for chat interactions [1][2].
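
On the "no API key" part specifically: since BaseLLM is abstract, you need a concrete, locally hosted model to fill the llm slot. As a minimal sketch, assuming an Ollama server is running locally and the named model has already been pulled (the model name is a placeholder, not a requirement), a LangChain-compatible local LLM could look like this:

# Hedged sketch: any LangChain-compatible local LLM works here.
# Assumes a local Ollama server with a pulled model; no API key needed.
from langchain_community.llms import Ollama

llm = Ollama(model="llama2")  # placeholder model name

# Pass this llm to create_llama_agent / create_llama_chat_agent above.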

To continue talking to Dosu, mention @dosu.

atodhruv commented 1 month ago

@dosu Thanks for the response. Will it work with RAG query tools or function tools? It would be very helpful if you could share a small snippet of these.

Thanks in advance.

dosubot[bot] commented 1 month ago

Yes, the create_llama_agent and create_llama_chat_agent functions can work with RAG query tools or function tools. They use the LlamaToolkit to collect tools and initialize agents. Here's a small code snippet:

from llama_index.legacy.langchain_helpers.agents.toolkits import LlamaToolkit
from llama_index.legacy.bridge.langchain import BaseLLM
from llama_index.legacy.langchain_helpers.agents.agents import create_llama_agent, create_llama_chat_agent

# Assume `my_toolkit` is a LlamaToolkit built from your tool configs and
# `my_llm` is a concrete BaseLLM instance (BaseLLM itself is abstract)
agent_executor = create_llama_agent(
    toolkit=my_toolkit,
    llm=my_llm,
    agent='zero-shot-react-description'
)

chat_agent_executor = create_llama_chat_agent(
    toolkit=my_toolkit,
    llm=my_llm
)

# Both are LangChain AgentExecutors, so invoke them with `run`
result_simple = agent_executor.run("Your task here")
result_chat = chat_agent_executor.run("Your chat task here")

print(result_simple)
print(result_chat)

These functions wrap around initialize_agent, allowing them to work with various tools provided by LlamaToolkit [1][2].
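
To make the RAG part concrete, here is a minimal sketch of how a LlamaToolkit could be built around a query engine, assuming the legacy IndexToolConfig API. The data directory, tool name, and description are placeholders; also note that the default embedding model may itself call a hosted API, so a fully keyless setup would additionally need a local embedding model:

from llama_index.legacy import VectorStoreIndex, SimpleDirectoryReader
from llama_index.legacy.langchain_helpers.agents import IndexToolConfig, LlamaToolkit

# Build a simple RAG query engine over local documents
# (placeholder path; swap in your own data directory)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap the query engine as a tool the agent can call
tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="docs_query_tool",  # hypothetical tool name
    description="Answers questions about the local document set.",
    tool_kwargs={"return_direct": True},
)

my_toolkit = LlamaToolkit(index_configs=[tool_config])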

To continue talking to Dosu, mention @dosu.

atodhruv commented 1 month ago

@dosu Thanks. Previously I learned the TypeScript version of LlamaIndex. I am new to LlamaIndex and Python too, so I am confused about how to integrate the Toolkit and a prompt into an agent, and how to make multiple agents communicate with each other.

dosubot[bot] commented 1 month ago

To integrate the Toolkit and prompt into an agent and enable communication between multiple agents using LlamaIndex in Python, you can use the create_llama_agent function. Here's a simple example:

from llama_index.legacy.langchain_helpers.agents.toolkits import LlamaToolkit
from llama_index.legacy.langchain_helpers.agents.agents import create_llama_agent

# Initialize your toolkit and language model. LlamaToolkit takes the
# tool configs the agent may call, and `llm` must be a concrete LLM
# instance (BaseLLM itself is abstract and cannot be instantiated).
toolkit = LlamaToolkit(index_configs=[])  # add your IndexToolConfig list
llm = ...  # a concrete (e.g. local) LLM instance

# Create an agent
agent_executor = create_llama_agent(
    toolkit=toolkit,
    llm=llm,
    agent='zero-shot-react-description'
)

# Use the agent_executor to enable communication between agents.
# This part depends on your specific use case and how you want agents
# to interact.

For communication between multiple agents, you can define tasks or messages that agents can handle and pass these between agents through each executor's run method. This requires additional logic based on your application's needs [1][2][3].
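
As an illustration only (the coordination pattern is up to your application), two executors built as above can be chained by feeding one agent's output into the other's prompt. The prompt strings below are placeholders, and agent_executor / chat_agent_executor refer to the executors created in the earlier snippets:

# Minimal hand-off sketch: one agent gathers facts, the other reshapes
# them for a different audience.
research_result = agent_executor.run(
    "Collect the key facts about topic X from the indexed documents."
)

summary = chat_agent_executor.run(
    "Summarize the following findings for a non-technical reader:\n"
    + research_result
)

print(summary)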

To continue talking to Dosu, mention @dosu.