kmcduffee-verisk closed this issue 1 month ago.
@kmcduffee-verisk Looking at your error: did you create the agent without any prompts and then update it to add one?
@drewtul It seems to happen either way.
@kmcduffee-verisk Does it happen on the first apply or the second? I can provoke an error by providing some overrides but not all, and I have a fix for that, but I'm not getting the same error as you.
@kmcduffee-verisk I can reproduce your error, but only by including an empty block:
resource "aws_bedrockagent_agent" "test" {
  agent_name                  = "test"
  agent_resource_role_arn     = aws_iam_role.test.arn
  prepare_agent               = true
  foundation_model            = "anthropic.claude-3-sonnet-20240229-v1:0"
  idle_session_ttl_in_seconds = 60
  instruction                 = "Something here"

  prompt_override_configuration {
  }
}
Can you double-check that you don't have a typo in prompt_configurations in your configuration?
@drewtul Thanks for digging in! Indeed, the issue turned out to be my prompt template file. In the AWS UI, where the JSON worked fine as-is, there was a newline after the first quote of the content block. Once I brought the text back up to the quote, everything worked.
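For illustration, the problem looked roughly like the following. This is a hypothetical reconstruction, since the actual template file was not shared, and "Some prompt text" stands in for the real content. The first form, with a newline immediately after the opening quote of content, was tolerated by the AWS console but broke the Terraform apply:

```json
{"anthropic_version":"bedrock-2023-05-31","system":"","messages":[{"role":"user","content":"
Some prompt text"}]}
```

Moving the text up so it begins right after the quote produced a template that worked in both places:

```json
{"anthropic_version":"bedrock-2023-05-31","system":"","messages":[{"role":"user","content":"Some prompt text"}]}
```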
I did run into an issue related to #37168, though. It appears that if you do not provide prompt configurations for every prompt type (orchestration, pre-processing, knowledge base, post-processing), you get the error noted there. I was only able to get past that issue by doing the following, essentially blanking out the options my agent does not use:
prompt_override_configuration {
  prompt_configurations = [
    {
      base_prompt_template = "{\"anthropic_version\":\"bedrock-2023-05-31\",\"system\":\"\",\"messages\":[{\"role\":\"user\",\"content\":\"\"}]}"
      inference_configuration = [
        {
          max_length     = 2048
          stop_sequences = ["\n\nHuman:"]
          temperature    = 0
          top_k          = 250
          top_p          = 1
        }
      ]
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "DISABLED"
      prompt_type          = "PRE_PROCESSING"
    },
    {
      base_prompt_template = "{\"anthropic_version\":\"bedrock-2023-05-31\",\"system\":\"\",\"messages\":[{\"role\":\"user\",\"content\":\"\"}]}"
      inference_configuration = [
        {
          max_length     = 2048
          stop_sequences = ["\n\nHuman:"]
          temperature    = 0
          top_k          = 250
          top_p          = 1
        }
      ]
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "DISABLED"
      prompt_type          = "ORCHESTRATION"
    },
    {
      base_prompt_template = "You are not a knowledge base assistant."
      inference_configuration = [
        {
          max_length     = 2048
          stop_sequences = ["\n\nHuman:"]
          temperature    = 0
          top_k          = 250
          top_p          = 1
        }
      ]
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "DISABLED"
      prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"
    },
    {
      base_prompt_template = file("${path.module}/prompt_templates/post_processing.json")
      inference_configuration = [
        {
          max_length     = 2048
          stop_sequences = ["\n\nHuman:"]
          temperature    = 0
          top_k          = 250
          top_p          = 1
        }
      ]
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "POST_PROCESSING"
    }
  ]
}
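Since all four entries above repeat the same inference_configuration, one way to trim the duplication is to factor it into a local. This is only a sketch, not verified against the provider schema; default_inference is a name introduced here for illustration:

```hcl
locals {
  # Shared inference settings reused by every prompt configuration
  # (assumption: the provider accepts the same object shape when the
  # list is built from a local instead of written inline).
  default_inference = [
    {
      max_length     = 2048
      stop_sequences = ["\n\nHuman:"]
      temperature    = 0
      top_k          = 250
      top_p          = 1
    }
  ]
}

# Each entry in prompt_configurations then shrinks to, e.g.:
#   {
#     base_prompt_template    = file("${path.module}/prompt_templates/post_processing.json")
#     inference_configuration = local.default_inference
#     parser_mode             = "DEFAULT"
#     prompt_creation_mode    = "OVERRIDDEN"
#     prompt_state            = "ENABLED"
#     prompt_type             = "POST_PROCESSING"
#   }
```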
@kmcduffee-verisk I did encounter the error from #37168, and I've raised a PR to fix that issue, with further tests covering adding, removing, and partial prompt configurations.
Terraform Core Version
1.8.5
AWS Provider Version
5.53.0
Affected Resource(s)
aws_bedrockagent_agent
Expected Behavior
Ability to apply the aws_bedrockagent_agent resource with a custom prompt_override_configuration.
Actual Behavior
Apply does not succeed. The following error appears on terraform apply:
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
Steps to Reproduce
Run terraform apply with the included resource definition. It will succeed in creating the agent if the prompt_override_configuration is left out.
Debug Output
No response
Panic Output
No response
Important Factoids
No response
References
No response
Would you like to implement a fix?
No