langchain-ai / langchain-aws

Build LangChain Applications on AWS
MIT License

AWS Bedrock Service Raising Error: "Malformed input request: #: extraneous key [tools] is not permitted." #173

Open jwstanwick opened 7 months ago

jwstanwick commented 7 months ago

Example Code

The following code raises a "Malformed input request" ValidationException, caused by the extraneous key [tools], when querying the AWS Bedrock service.

import os

from dotenv import load_dotenv
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.llms import Bedrock
from langchain_community.utilities import SQLDatabase

def init_test_db(debug=False):
    # Initialize the database
    # Load environment variables from .env.local file
    load_dotenv(dotenv_path='.env.local')

    # Read PostgreSQL connection parameters from environment variables
    POSTGRES_DB = os.getenv("POSTGRES_DB")
    POSTGRES_USER = os.getenv("POSTGRES_USER")
    POSTGRES_PASSWORD = os.getenv("POSTGRES_PASSWORD")
    POSTGRES_HOST = os.getenv("POSTGRES_HOST")
    POSTGRES_PORT = os.getenv("POSTGRES_PORT")

    # Build the PostgreSQL connection URI
    postgres_uri = f"postgresql://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}"
    db = SQLDatabase.from_uri(postgres_uri)
    if debug:
        print(db.table_info)
    return db

def init_llm(model='claude-v2.1'):
    # Create the LLM
    models = {
        'claude-v2.1': 'anthropic.claude-v2:1', 
        'claude-v3-sonnet': 'anthropic.claude-3-sonnet-20240229-v1:0',
        'mistral-large': "mistral.mistral-large-2402-v1:0",
        'claude-v2': 'anthropic.claude-v2',
        'jurassic-2-mid': 'ai21.j2-mid-v1',
        'jurassic-2-ultra': "ai21.j2-ultra-v1",
    }

    llm = Bedrock(model_id=models[model], streaming=False, region_name='us-east-1')
    return llm

llm = init_llm()
db = init_test_db()
agent_executor = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)
agent_executor.invoke("Which customer has the highest outstanding loan value")

Error Message and Stack Trace (if applicable)

ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: #: extraneous key [tools] is not permitted, please reformat your input and try again.

Description

I am trying to use the LangChain library to build an AWS Bedrock-powered SQL agent.

System Info

langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.36
langchain-openai==0.1.1
langchain-text-splitters==0.0.1

Windows 11, Python 3.8.10

kamilcglr commented 6 months ago

Hello @jwstanwick, same issue here with mistral.mixtral-8x7b-instruct-v0:1. Did you find a solution?

MichalTodt commented 6 months ago

Hi, I am experiencing the same with anthropic.claude-3-haiku-20240307-v1:0

raulraja commented 6 months ago

Is this related to Bedrock not sending the header "anthropic-beta": "tools-2024-04-04" ?

https://docs.anthropic.com/claude/docs/tool-use

Please note that during the beta period:

- Streaming (stream=true) is not yet supported. We plan to add streaming support in a future beta version.
- While the feature is production-ready, we may introduce multiple beta versions before the final release.
- Tool use is not yet available on third-party platforms like Vertex AI or AWS Bedrock, but is coming soon. See Legacy tool use for guidance on how to do tool use on Vertex AI and AWS Bedrock right now.

Adyansh-RCRM commented 4 months ago

Hi, I am facing the same issue with llama3 model. Did anyone get a solution for fixing this?

ccurme commented 2 months ago

Hello,

LangChain makes a distinction between text-in, text-out LLMs and chat models, which work with messages that have distinct roles. The Bedrock class is a text-in, text-out LLM and does not support tool calling.
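To make the distinction concrete, here is a rough sketch of the two request shapes (field names are taken from the public Bedrock docs; the values are made up, not from a real invocation). The legacy InvokeModel body for anthropic.claude-v2:1 has no field for tool definitions, so the tools key presumably injected by the openai-tools agent is rejected, while the Converse API accepts a toolConfig block:

```python
# Illustrative only: legacy InvokeModel body for anthropic.claude-v2:1.
# There is no place for tool definitions, so an injected "tools" key
# triggers "extraneous key [tools] is not permitted".
invoke_model_body = {
    "prompt": "\n\nHuman: Which customer has the highest loan value?\n\nAssistant:",
    "max_tokens_to_sample": 512,
}

# The Converse API, by contrast, accepts tool definitions via toolConfig.
converse_request = {
    "messages": [
        {"role": "user", "content": [{"text": "What is magic_function(3)?"}]}
    ],
    "toolConfig": {
        "tools": [
            {
                "toolSpec": {
                    "name": "magic_function",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {"input": {"type": "integer"}},
                        }
                    },
                }
            }
        ]
    },
}
```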

I'd recommend trying with ChatBedrockConverse, if you aren't working with custom models. This interacts with AWS' Converse API, which AWS recommends for chat models.

Example:

from langchain_aws import ChatBedrockConverse
from langchain_core.tools import tool

@tool
def magic_function(input: int) -> int:
    """Call a magic function."""
    # Body intentionally empty: this example only inspects the tool
    # call the model emits, so the tool is never actually executed.
    pass

llm = ChatBedrockConverse(
    model="meta.llama3-1-70b-instruct-v1:0",
).bind_tools([magic_function])

response = llm.invoke("What is the value of magic_function(3)?")
response.tool_calls
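If the model decides to call the tool, response.tool_calls is a list of dicts. A hand-constructed illustration of the expected shape (the id value here is a made-up placeholder, not from a real invocation):

```python
# Hand-constructed illustration of the tool_calls shape LangChain chat
# models return; no AWS call is made here. The "id" is a placeholder.
tool_calls = [
    {
        "name": "magic_function",      # which tool the model wants to run
        "args": {"input": 3},          # arguments parsed into a dict
        "id": "tooluse_example123",
        "type": "tool_call",
    }
]
```

From here an agent executor would run the named tool with the parsed args and feed the result back to the model as a tool message.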