langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.13k stars 15.21k forks

Agent how to call remote tool (exposed by langserve) #15094

Closed weiminw closed 7 months ago

weiminw commented 10 months ago

System Info

langchain 0.0.352

Who can help?

No response

Information

Related Components

Reproduction

I used LangServe to develop a chain and exposed it as a remote tool. My friend wants to call my chain from his agent. How can he do that?

Joke chain:

#!/usr/bin/env python
from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langserve import add_routes

llm = ChatOpenAI(
    openai_api_base="http://192.168.1.201:18001/v1",
    openai_api_key="EMPTY",
    model="gpt-3.5-turbo",
    temperature=0.5,
    top_p=0.3,  # pass a number, not a string
    default_headers={"x-heliumos-appId": "general-inference"},
    tiktoken_model_name="gpt-3.5-turbo",
    verbose=True,
)

app = FastAPI(
  title="LangChain Server",
  version="1.0",
  description="A simple api server using Langchain's Runnable interfaces",
)

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
add_routes(
    app,
    prompt | llm,
    path="/joke",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)

Agent:

from langchain.agents import initialize_agent, AgentType

from langchain_community.chat_models import ChatOpenAI
from langserve import RemoteRunnable

from langchain.tools import Tool

llm = ChatOpenAI(
    openai_api_base="http://xxxx:xxx/v1",
    openai_api_key="EMPTY",
    model="gpt-3.5-turbo",
    temperature=0.5,
    top_p=0.3,  # pass a number, not a string
    tiktoken_model_name="gpt-3.5-turbo",
    verbose=True,
)

remote_tool = RemoteRunnable("http://xxx:xxx/joke/")
tools = [
    Tool.from_function(
        func=remote_tool.invoke,
        name="joke",
        description="Use this tool when the user asks for a joke",
        # coroutine= ... <- you can specify an async method if desired as well
    ),
]
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

result = agent.run("Tell a joke about taking a taxi")
print(result)

The agent always fails with an error because no valid input is passed to the remote tool.

Expected behavior

no error

dosubot[bot] commented 10 months ago

🤖

Hello, @weiminw! Long time no see, I hope all is well. Welcome back to LangChain.

It seems like the issue you're encountering is related to the input being passed to the remote tool. The RemoteRunnable class expects a specific format of input when its invoke method is called.

In your agent code, the tool forwards the agent's plain-string input directly to invoke:

func=remote_tool.invoke,

The joke chain, however, expects a dictionary (with a "topic" key, matching its prompt template), so the plain string is rejected as invalid input.

To fix this, you need to pass the appropriate input to the invoke method. The exact format of the input depends on how your remote tool (the joke chain) is implemented, but it will typically be a dictionary with specific keys.

Here's an example of how you might modify your code:

tools = [
    Tool.from_function(
        func=lambda input: remote_tool.invoke({"topic": input}),
        name="joke",
        description="Use this tool when the user asks for a joke",
    ),
]

In this example, I'm assuming that your joke chain expects a dictionary with a "topic" key. You'll need to replace this with the actual expected input for your joke chain.
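To see why the lambda adapter fixes the mismatch, here is a minimal stand-in sketch with no LangChain dependencies. `fake_remote_invoke` is a hypothetical placeholder for `remote_tool.invoke`: like the joke chain's prompt (`"tell me a joke about {topic}"`), it only accepts a dict with a `"topic"` key.

```python
# Hypothetical stand-in for remote_tool.invoke: the joke chain requires
# a dict input with a "topic" key, matching its prompt template.
def fake_remote_invoke(payload):
    if not (isinstance(payload, dict) and "topic" in payload):
        raise ValueError("expected a dict like {'topic': ...}")
    return f"a joke about {payload['topic']}"

# An agent tool receives a single string from the LLM, so wiring
# func=fake_remote_invoke directly would raise the error above.
# The adapter maps the agent's string onto the dict the chain expects:
adapter = lambda text: fake_remote_invoke({"topic": text})

print(adapter("taxis"))  # a joke about taxis
```

The same shape applies to the real tool: the adapter is the only place where the agent's string input is reconciled with the chain's dict schema.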

I hope this helps! If you're still encountering issues, could you please provide more details about the error message you're seeing, and the expected input format for your joke chain? This will help me provide more specific guidance.
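As an aside, the `coroutine=` hook mentioned in the original snippet can wrap the async path the same way. A minimal sketch under the same assumption, where `fake_remote_ainvoke` is a hypothetical stand-in for `remote_tool.ainvoke`:

```python
import asyncio

# Hypothetical async stand-in for remote_tool.ainvoke (the async
# counterpart of invoke on a Runnable).
async def fake_remote_ainvoke(payload):
    return f"a joke about {payload['topic']}"

# Async adapter, suitable for Tool.from_function(..., coroutine=joke_async):
# it maps the agent's string input onto the dict the chain expects.
async def joke_async(text):
    return await fake_remote_ainvoke({"topic": text})

print(asyncio.run(joke_async("taxis")))  # a joke about taxis
```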

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.