Open acwwat opened 2 months ago
Another use case: I tried to provide the prompt_configurations block for all four prompt types, but that configuration also fails to apply. The problem is that the API expects certain arguments to be omitted from any prompt_configurations block that uses default settings. If I provide the following configuration (all but ORCHESTRATION use default settings):
resource "aws_bedrockagent_agent" "forex_asst" {
  agent_name              = "ForexAssistant"
  agent_resource_role_arn = aws_iam_role.bedrock_agent_forex_asst.arn
  description             = "An assistant that provides forex rate information."
  foundation_model        = data.aws_bedrock_foundation_model.this.model_id
  instruction             = "You are an assistant that looks up today's currency exchange rates. A user may ask you what the currency exchange rate is for one currency to another. They may provide either the currency name or the three-letter currency code. If they give you a name, you may first need to look up the currency code by its name."

  prompt_override_configuration {
    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/pre_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      prompt_state         = "DISABLED"
      prompt_type          = "PRE_PROCESSING"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "\n\nHuman:"
        ]
        temperature = 0
        top_k       = 250
        top_p       = 1
      }
    }

    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/orchestration.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "ORCHESTRATION"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "$invoke$",
          "$answer$",
          "$error$"
        ]
        temperature = 0
        top_k       = 250
        top_p       = 1
      }
    }

    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/kb_resp_gen.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      prompt_state         = "DISABLED"
      prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "\n\nHuman:"
        ]
        temperature = 0
        top_k       = 250
        top_p       = 1
      }
    }

    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/post_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      prompt_state         = "DISABLED"
      prompt_type          = "POST_PROCESSING"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "$invoke$",
          "$answer$",
          "$error$"
        ]
        temperature = 0
        top_k       = 250
        top_p       = 1
      }
    }
  }
}
I get the following validation error:
│ operation error Bedrock Agent: CreateAgent, https response error StatusCode: 400, RequestID: 9409d5c8-be89-4983-a0e3-410178033863, ValidationException:
│ BasePromptTemplate is incompatible with prompt type: PRE_PROCESSING when promptCreationMode is DEFAULT. Remove BasePromptTemplate and retry your
│ request.;InferenceConfiguration is incompatible with prompt type: PRE_PROCESSING when promptCreationMode is DEFAULT. Remove InferenceConfiguration and retry
│ your request.;PromptState is incompatible with prompt type: PRE_PROCESSING when promptCreationMode is DEFAULT. Remove PromptState and retry your
│ request.;BasePromptTemplate is incompatible with prompt type: KNOWLEDGE_BASE_RESPONSE_GENERATION when promptCreationMode is DEFAULT. Remove
│ BasePromptTemplate and retry your request.;InferenceConfiguration is incompatible with prompt type: KNOWLEDGE_BASE_RESPONSE_GENERATION when
│ promptCreationMode is DEFAULT. Remove InferenceConfiguration and retry your request.;PromptState is incompatible with prompt type:
│ KNOWLEDGE_BASE_RESPONSE_GENERATION when promptCreationMode is DEFAULT. Remove PromptState and retry your request.;BasePromptTemplate is incompatible with
│ prompt type: POST_PROCESSING when promptCreationMode is DEFAULT. Remove BasePromptTemplate and retry your request.;InferenceConfiguration is incompatible
│ with prompt type: POST_PROCESSING when promptCreationMode is DEFAULT. Remove InferenceConfiguration and retry your request.;PromptState is incompatible with
│ prompt type: POST_PROCESSING when promptCreationMode is DEFAULT. Remove PromptState and retry your request.
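Reading the error messages together, a prompt_configurations block whose prompt_creation_mode is "DEFAULT" apparently has to be stripped down to just three arguments. A minimal sketch of the shape the API will accept, based solely on the validation messages above:

```hcl
# For a DEFAULT prompt type, omit base_prompt_template, prompt_state,
# and the inference_configuration block entirely:
prompt_configurations {
  parser_mode          = "DEFAULT"
  prompt_creation_mode = "DEFAULT"
  prompt_type          = "PRE_PROCESSING"
}
```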
After I fixed these validation issues in the configuration like so:
resource "aws_bedrockagent_agent" "forex_asst" {
  agent_name              = "ForexAssistant"
  agent_resource_role_arn = aws_iam_role.bedrock_agent_forex_asst.arn
  description             = "An assistant that provides forex rate information."
  foundation_model        = data.aws_bedrock_foundation_model.this.model_id
  instruction             = "You are an assistant that looks up today's currency exchange rates. A user may ask you what the currency exchange rate is for one currency to another. They may provide either the currency name or the three-letter currency code. If they give you a name, you may first need to look up the currency code by its name."

  prompt_override_configuration {
    prompt_configurations {
      # base_prompt_template = file("${path.module}/prompt_templates/pre_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "PRE_PROCESSING"

      # inference_configuration {
      #   max_length = 2048
      #   stop_sequences = [
      #     "\n\nHuman:"
      #   ]
      #   temperature = 0
      #   top_k       = 250
      #   top_p       = 1
      # }
    }

    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/orchestration.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "ORCHESTRATION"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "$invoke$",
          "$answer$",
          "$error$"
        ]
        temperature = 0
        top_k       = 250
        top_p       = 1
      }
    }

    prompt_configurations {
      # base_prompt_template = file("${path.module}/prompt_templates/kb_resp_gen.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"

      # inference_configuration {
      #   max_length = 2048
      #   stop_sequences = [
      #     "\n\nHuman:"
      #   ]
      #   temperature = 0
      #   top_k       = 250
      #   top_p       = 1
      # }
    }

    prompt_configurations {
      # base_prompt_template = file("${path.module}/prompt_templates/post_processing.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "POST_PROCESSING"

      # inference_configuration {
      #   max_length = 2048
      #   stop_sequences = [
      #     "$invoke$",
      #     "$answer$",
      #     "$error$"
      #   ]
      #   temperature = 0
      #   top_k       = 250
      #   top_p       = 1
      # }
    }
  }
}
I then get the inconsistent state error, because the state returns all attributes for the prompt_configurations blocks:
aws_bedrockagent_agent.forex_asst: Creating...
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_bedrockagent_agent.forex_asst, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .prompt_override_configuration[0].prompt_configurations: planned set element
│ cty.ObjectVal(map[string]cty.Value{"base_prompt_template":cty.NullVal(cty.String),
│ "inference_configuration":cty.NullVal(cty.List(cty.Object(map[string]cty.Type{"max_length":cty.Number, "stop_sequences":cty.List(cty.String),
│ "temperature":cty.Number, "top_k":cty.Number, "top_p":cty.Number}))), "parser_mode":cty.StringVal("DEFAULT"),
│ "prompt_creation_mode":cty.StringVal("DEFAULT"), "prompt_state":cty.NullVal(cty.String), "prompt_type":cty.StringVal("KNOWLEDGE_BASE_RESPONSE_GENERATION")})
│ does not correlate with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_bedrockagent_agent.forex_asst, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .prompt_override_configuration[0].prompt_configurations: planned set element
│ cty.ObjectVal(map[string]cty.Value{"base_prompt_template":cty.NullVal(cty.String),
│ "inference_configuration":cty.NullVal(cty.List(cty.Object(map[string]cty.Type{"max_length":cty.Number, "stop_sequences":cty.List(cty.String),
│ "temperature":cty.Number, "top_k":cty.Number, "top_p":cty.Number}))), "parser_mode":cty.StringVal("DEFAULT"),
│ "prompt_creation_mode":cty.StringVal("DEFAULT"), "prompt_state":cty.NullVal(cty.String), "prompt_type":cty.StringVal("POST_PROCESSING")}) does not correlate
│ with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_bedrockagent_agent.forex_asst, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value:
│ .prompt_override_configuration[0].prompt_configurations: planned set element
│ cty.ObjectVal(map[string]cty.Value{"base_prompt_template":cty.NullVal(cty.String),
│ "inference_configuration":cty.NullVal(cty.List(cty.Object(map[string]cty.Type{"max_length":cty.Number, "stop_sequences":cty.List(cty.String),
│ "temperature":cty.Number, "top_k":cty.Number, "top_p":cty.Number}))), "parser_mode":cty.StringVal("DEFAULT"),
│ "prompt_creation_mode":cty.StringVal("DEFAULT"), "prompt_state":cty.NullVal(cty.String), "prompt_type":cty.StringVal("PRE_PROCESSING")}) does not correlate
│ with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
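For what it's worth, in each of these errors the planned set element has base_prompt_template, prompt_state, and inference_configuration as null, while the element the provider reads back from the API is fully populated. A hypothetical reconstruction of one of the "actual" elements, inferred from the null attributes in the planned value (the concrete defaults are supplied by the service):

```hcl
# Hypothetical read-back element for a DEFAULT prompt type: every
# attribute is populated by the service, so it can never match the
# planned element that carries nulls.
prompt_configurations {
  base_prompt_template = "..." # service default template text (placeholder)
  parser_mode          = "DEFAULT"
  prompt_creation_mode = "DEFAULT"
  prompt_state         = "DISABLED"
  prompt_type          = "PRE_PROCESSING"

  inference_configuration {
    # service default inference settings
  }
}
```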
I am getting the same thing with AWS provider version 5.52.0 and Terraform version 1.5.7.
I am trying to create a relatively default agent but want to customize the KNOWLEDGE_BASE_RESPONSE_GENERATION prompt_type.
Here's what I am trying to configure:
- PRE_PROCESSING: disabled
- ORCHESTRATION: enabled but default everything
- KNOWLEDGE_BASE_RESPONSE_GENERATION: overridden
- POST_PROCESSING: disabled
My code is very similar to @acwwat's:
resource "aws_bedrockagent_agent" "operator" {
  for_each                    = var.operators
  agent_name                  = "${local.name}-${each.key}"
  agent_resource_role_arn     = aws_iam_role.bedrock_agent.arn
  foundation_model            = "anthropic.claude-3-sonnet-20240229-v1:0"
  idle_session_ttl_in_seconds = 600
  instruction                 = file("${path.module}/prompt_templates/agent_instruction.txt")
  prepare_agent               = true

  prompt_override_configuration {
    prompt_configurations {
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "PRE_PROCESSING"
    }

    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/orchestration.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "ORCHESTRATION"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "$invoke$",
          "$answer$",
          "$error$"
        ]
        temperature = 0
        top_k       = 250
        top_p       = 1
      }
    }

    prompt_configurations {
      base_prompt_template = file("${path.module}/prompt_templates/knowledge_base_response_generation.txt")
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "OVERRIDDEN"
      prompt_state         = "ENABLED"
      prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"

      inference_configuration {
        max_length = 2048
        stop_sequences = [
          "\n\nHuman:"
        ]
        temperature = 0.01
        top_k       = 250
        top_p       = 0.8
      }
    }

    prompt_configurations {
      parser_mode          = "DEFAULT"
      prompt_creation_mode = "DEFAULT"
      # prompt_state       = "DISABLED"
      prompt_type          = "POST_PROCESSING"
    }
  }

  tags = merge(local.common_tags, tomap({ Name = "${local.name}-${each.key}" }))
}

resource "aws_bedrockagent_agent_knowledge_base_association" "operator" {
  for_each             = var.operators
  agent_id             = aws_bedrockagent_agent.operator[each.key].id
  description          = "${local.name}-${each.key}"
  knowledge_base_id    = aws_bedrockagent_knowledge_base.operators[each.key].id
  knowledge_base_state = "ENABLED"
}

resource "aws_bedrockagent_agent_alias" "operator" {
  for_each         = var.operators
  agent_alias_name = "${local.name}-${each.key}"
  agent_id         = aws_bedrockagent_agent.operator[each.key].agent_id
  description      = each.key
}
The agents are created, but the version/alias is not created; I suspect this is because of the inconsistent result after apply error.
Terraform Core Version
1.6.6
AWS Provider Version
1.47.0
Affected Resource(s)
aws_bedrockagent_agent
Expected Behavior
The resource is created or updated successfully.
Actual Behavior
The resource fails to create or update due to the validation error below.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
You'll also need to place the attached orchestration.txt file in a prompt_templates folder in the same location as the Terraform configuration.
Steps to Reproduce
Debug Output
No response
Panic Output
No response
Important Factoids
My goal is to customize only one of the four prompt configurations, since they are very verbose and would be hard to repeat in Terraform. Not sure if it is possible, but it would be great if the resource could use the state for the blocks that are not specified, for consistency.
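For illustration, the kind of configuration this would enable: declaring only the customized prompt type and letting the other three keep their stored defaults. This is a hypothetical sketch of the requested behavior, reusing the identifiers from the configuration above; it is not something that currently works.

```hcl
# Requested behavior (hypothetical): declare only the overridden prompt
# type; the provider would keep the other three prompt types consistent
# with what the service reports, instead of requiring all four blocks.
prompt_override_configuration {
  prompt_configurations {
    base_prompt_template = file("${path.module}/prompt_templates/kb_resp_gen.txt")
    parser_mode          = "DEFAULT"
    prompt_creation_mode = "OVERRIDDEN"
    prompt_state         = "ENABLED"
    prompt_type          = "KNOWLEDGE_BASE_RESPONSE_GENERATION"
  }
}
```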
References
No response
Would you like to implement a fix?
None