Still failing, but some progress:
from langchain_aws import ChatBedrock

llm = ChatBedrock(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",
    temperature=0.0,
    endpoint_url="https://bedrock.us-west-2.amazonaws.com/",
)
res = llm.invoke("What is 1+1?")
print(res.content)
This no longer raises the error, but it returns an empty response:
content='' additional_kwargs={'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0}, 'stop_reason': None, 'model_id': 'anthropic.claude-3-5-sonnet-20241022-v2:0'} response_metadata={'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0}, 'stop_reason': None, 'model_id': 'anthropic.claude-3-5-sonnet-20241022-v2:0'} id='run-...' usage_metadata={'input_tokens': 0, 'output_tokens': 0, 'total_tokens': 0}
What could cause this? There isn't even an error to dig into anymore.
Another update: we can change the model to a non-existent one, e.g. model="anthropic.claude-3-5-sonnet-20241022-v3:0", and the response is exactly the same, with no error.
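For anyone debugging a similarly silent empty response: one way to at least see which endpoint and operation the request is actually hitting is to turn on boto3's wire logging (a minimal sketch; it only assumes boto3 is installed):

import logging

import boto3

# Log every botocore request/response at DEBUG level, including the resolved
# endpoint URL, before constructing the ChatBedrock client.
boto3.set_stream_logger("botocore", logging.DEBUG)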
Any help is appreciated.
@VMinB12
The problem might be the endpoint_url you are using. Instead of setting that, can you use region_name="us-west-2" if you are trying to target a specific region?
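Something like this, for example (a minimal sketch, assuming default AWS credentials and langchain_aws are already set up; the region value is just an example):

from langchain_aws import ChatBedrock

# Let boto3 resolve the endpoint from the service name and region instead of
# hard-coding endpoint_url.
llm = ChatBedrock(
    model_id="anthropic.claude-3-5-sonnet-20241022-v2:0",
    region_name="us-west-2",
)
print(llm.invoke("What is 1+1?").content)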
@VMinB12
I believe this is the root cause of your error. You are using the bedrock endpoint, while the invoke API uses bedrock-runtime. For example, this runs fine for me; notice the change in the endpoint_url.
from langchain_aws import ChatBedrock
llm = ChatBedrock(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",
    endpoint_url="https://bedrock-runtime.us-west-2.amazonaws.com",
)
res = llm.invoke("What is 1+1?")
print(res.content)
# Output
1 + 1 = 2
But in general, you shouldn't need to specify the endpoint URL at all; this is taken care of by the boto3 library. Reopen the issue if you still encounter this problem.
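To illustrate that, here is a small plain-boto3 sketch showing that the endpoint is derived from the service name and the region, so there is normally no reason to pass endpoint_url yourself:

import boto3

# boto3 builds the correct endpoint from the service name and region.
runtime = boto3.client("bedrock-runtime", region_name="us-west-2")
control = boto3.client("bedrock", region_name="us-west-2")

print(runtime.meta.endpoint_url)  # https://bedrock-runtime.us-west-2.amazonaws.com
print(control.meta.endpoint_url)  # https://bedrock.us-west-2.amazonaws.com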
I rather think the model is not yet available in some Bedrock accounts, such as mine. I was following the documentation and got the same error while trying to reach the anthropic.claude-3-5-sonnet-20241022-v2:0 model. As soon as you switch to v1, it works.
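One way to check what a given account/region can actually see (a sketch; it requires permission to call ListFoundationModels) is to list the Anthropic models and look for the v2 identifier:

import boto3

# List the Anthropic models visible to this account in the target region.
bedrock = boto3.client("bedrock", region_name="us-west-2")
response = bedrock.list_foundation_models(byProvider="Anthropic")

for summary in response["modelSummaries"]:
    print(summary["modelId"])
# If "anthropic.claude-3-5-sonnet-20241022-v2:0" is missing from this list,
# no client-side configuration will make the invoke call succeed.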
@lulzim-bulica @3coins Indeed, it seems to have been a server-side issue. Somehow the issue resolved itself overnight and now the code from my first comment works.
from langchain_aws import ChatBedrock
# llm = ChatBedrockConverse(model="anthropic.claude-3-5-sonnet-20241022-v2:0")
llm = ChatBedrock(
    model_id="anthropic.claude-3-5-sonnet-20241022-v2:0",
    model_kwargs={"temperature": 0.001},
    region="us-west-2",
)
print(llm.invoke("hi!"))
and I am still getting the same error. I agree with @VMinB12; this version is not stable yet.
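Not something confirmed in this thread, but if the error mentions model access or on-demand throughput, one hedged thing to try is the US cross-region inference profile ID instead of the bare model ID:

from langchain_aws import ChatBedrock

# Hypothetical variant: some accounts can only reach this model through the
# cross-region inference profile (note the "us." prefix on the model ID).
llm = ChatBedrock(
    model_id="us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    provider="anthropic",  # set explicitly so the provider isn't guessed from the "us." prefix
    model_kwargs={"temperature": 0.001},
    region_name="us-west-2",
)
print(llm.invoke("hi!").content)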
import boto3
import json

# Initialize a session using your AWS credentials
session = boto3.Session()

# Create a Bedrock runtime client
bedrock = session.client('bedrock-runtime', region_name='us-east-1')

# Define the model ID and input message
model_id = 'mistral.mistral-large-2407-v1:0'
input_message = {
    'messages': [
        {
            'role': 'user',
            'content': 'Which LLM are you?'
        }
    ]
}

try:
    # Invoke the model
    response = bedrock.invoke_model(
        modelId=model_id,
        body=json.dumps(input_message)
    )
    # The response body is a streaming object, so read it before parsing
    print(json.dumps(json.loads(response['body'].read()), indent=4))
except Exception as e:
    print(f"An error occurred: {e}")
An error occurred: An error occurred (ValidationException) when calling the InvokeModel operation: The provided model identifier is invalid.
I'm facing a similar problem, can someone help me here?
I am trying to run the following code:
which throws this error:
I copied the model_id directly from Bedrock, so I don't think the issue is there. Any suggestions?
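In case it helps whoever hits this next: a ValidationException saying "The provided model identifier is invalid" usually means the model is not offered (or not enabled) in the region the client points at, rather than that the request body is wrong. A hedged sketch for ruling that out, using the Converse API so the message shape is the same for every chat model (the region below is only an example; mistral.mistral-large-2407-v1:0 is not offered in every region):

import boto3

# Example region; switch it if the model is not offered where you first tried.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="mistral.mistral-large-2407-v1:0",
    messages=[{"role": "user", "content": [{"text": "Which LLM are you?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])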