run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: llama_index with Bedrock: Anthropic Claude Prompt issue #15587

Closed · jayanthma closed 1 week ago

jayanthma commented 3 weeks ago

### Bug Description

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]
llm.chat(messages=messages)
```

The above code throws the error below:

```
  File "C:\Users\Jay\OneDrive\coding_projects\RAG_AWS_Neptune\venv\Lib\site-packages\llama_index\llms\bedrock\utils.py", line 157, in get_request_body
    if len(prompt) > 0 and prompt[0]["role"] == "system":
TypeError: string indices must be integers, not 'str'
```

This happens for both the `chat` and `complete` methods.
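
For context, the failing check indexes into `prompt` as if it were a list of message dicts, but `prompt` arrives as a plain string. A minimal standalone sketch of why that raises (illustrative only, not the library's code):

```python
# Illustrative only: reproduces the TypeError outside of llama_index.
prompt = "\n\nHuman: Tell me a story\n\nAssistant: "  # a plain prompt string

# Indexing a string yields a single character ("\n"), so the dict-style
# lookup prompt[0]["role"] fails with:
# TypeError: string indices must be integers, not 'str'
prompt[0]["role"]
```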

### Version

0.10.68

### Steps to Reproduce

Install the libraries:

```
pip install llama-index-llms-bedrock
pip install llama-index
```
```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0", region_name="ap-south-1")
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]
response = llm.chat(messages=messages)
```

### Relevant Logs/Tracebacks

_No response_
dosubot[bot] commented 3 weeks ago

Hey, @jayanthma! I'm here to help you with any bugs, questions, or contributions while you wait for a human maintainer. Let's tackle this together!

The error you're encountering occurs because the `prompt` parameter arrives as a string, but the code accesses it as if it were a list of dictionaries. This mismatch causes the TypeError.

In the `get_request_body` function within `llama_index.llms.bedrock.utils`, the `prompt` parameter should be a string. For the anthropic.claude model, the prompt should be formatted as follows:

```
"\n\nHuman: {user_message}\n\nAssistant: "
```

Here's how you can modify your code to ensure the prompt is correctly formatted:

```python
from llama_index.llms.bedrock import Bedrock
from llama_index.core.base.llms.types import ChatMessage

llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0", region_name="ap-south-1")
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]
response = llm.chat(messages=messages)
```

Ensure that the ChatMessage objects are correctly passed to the llm.chat method. The prompt will be internally formatted by the get_request_body function to match the expected structure for the anthropic.claude model [1].
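
For illustration, a rough sketch of the kind of flattening described above (a hypothetical helper; the library's actual formatting lives in `llama_index.llms.bedrock.utils` and may differ):

```python
# Hypothetical helper, for illustration only; not the library's actual code.
# Assumes messages is a list of ChatMessage objects whose role compares
# equal to plain strings ("system", "user", "assistant").
def to_claude_prompt(messages) -> str:
    system = "".join(m.content for m in messages if m.role == "system")
    dialogue = ""
    for m in messages:
        if m.role == "user":
            dialogue += f"\n\nHuman: {m.content}"
        elif m.role == "assistant":
            dialogue += f"\n\nAssistant: {m.content}"
    # The trailing "Assistant: " cues the model to produce the next reply.
    return f"{system}{dialogue}\n\nAssistant: "
```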

To continue talking to Dosu, mention @dosu.

jayanthma commented 3 weeks ago

> @jayanthma what happens if you do it without the kwarg? `llm.chat(messages)`?

I didn't use the kwargs here.

logan-markewich commented 3 weeks ago

Seems like there's a bug in the LLM class, tbh. The `get_request_body` function expects a string, but the Anthropic version of it is, for some reason, written to expect a list of chat message dicts.

As a workaround, I would suggest just using the bedrock-converse LLM class, since it has support for function calling as well.
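
Something like this (untested sketch; assumes `pip install llama-index-llms-bedrock-converse`):

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

# Same model, but served through the Bedrock Converse API wrapper.
llm = BedrockConverse(
    model="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="ap-south-1",
)
messages = [
    ChatMessage(role="system", content="You are a pirate with a colorful personality"),
    ChatMessage(role="user", content="Tell me a story"),
]
response = llm.chat(messages)
```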

Not sure when I can get to this, but if you are up for it, I recommend a PR. I think we need some way to decide whether `get_request_body` should be called on a string or on a list of chat messages? Or something?
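
Roughly this shape, maybe (a hypothetical sketch of the dispatch idea, not the actual llama_index code):

```python
# Hypothetical sketch: dispatch on the prompt's type, since chat-style
# providers (e.g. Anthropic Claude 3) take message dicts while legacy
# text-completion providers take a single prompt string.
def get_request_body(prompt, inference_parameters):
    if isinstance(prompt, str):
        return {"prompt": prompt, **inference_parameters}
    body = {
        "messages": [m for m in prompt if m["role"] != "system"],
        **inference_parameters,
    }
    if len(prompt) > 0 and prompt[0]["role"] == "system":
        # Claude 3 takes the system prompt as a separate top-level field.
        body["system"] = prompt[0]["content"]
    return body
```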

LHFO94 commented 2 weeks ago

Can confirm this is an issue and I am running into it as well. I looked into switching to the bedrock-converse LLM class, but it is missing the `_client` attribute. Downgrading to llama-index==0.10.30 resolved the issue.
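
For anyone else hitting this, the pin that worked for me:

```
pip install llama-index==0.10.30
```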

csotomon commented 2 weeks ago

Here is the fixed code: https://github.com/run-llama/llama_index/pull/15729/files