mtrefilek closed this issue 7 months ago
I am running into this issue as well. The official Titan documentation says that `stopSequences` is an array of strings:
```json
{
    "inputText": string,
    "textGenerationConfig": {
        "temperature": float,
        "topP": float,
        "maxTokenCount": int,
        "stopSequences": [string]
    }
}
```
but then goes on to say that the stop sequences should be concatenated using the `|` character:

> Stop sequences (stopSequences) – Specify character sequences to indicate where the model should stop. Use the | (pipe) character to separate different sequences (maximum 20 characters).

which seems strange: if `stopSequences` is an array of sequences, there should be no need to concatenate them with the `|` character.
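Concretely, the two readings of the documentation produce different request bodies. A minimal sketch of both payload shapes, using the field names from the schema above (the example prompt and config values are illustrative, not from the report):

```python
import json

# Reading 1: stopSequences as a true array of strings,
# as the JSON schema suggests.
body_array = {
    "inputText": "Hello",
    "textGenerationConfig": {
        "temperature": 0.5,
        "topP": 0.9,
        "maxTokenCount": 100,
        "stopSequences": ["Human:", "Assistant:"],
    },
}

# Reading 2: a single array element containing all sequences
# joined with "|", as the prose description suggests.
body_pipe = {
    "inputText": "Hello",
    "textGenerationConfig": {
        "temperature": 0.5,
        "topP": 0.9,
        "maxTokenCount": 100,
        "stopSequences": ["Human:|Assistant:"],
    },
}

# Both serialize to valid JSON, but only one can be what the service expects.
print(json.dumps(body_array["textGenerationConfig"]["stopSequences"]))
print(json.dumps(body_pipe["textGenerationConfig"]["stopSequences"]))
```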
+1, same issue here. The only `stopSequences` values that work are `User:` or `|`, which makes sense given that the validation regex is `^(\|+|User:)$`. Adding more than one element to the array (e.g. 5) returns `Malformed input request: expected maximum item count: 1, found: 5, please reformat your input and try again.` So at the moment you can't use `Human:` or similar as a stop sequence.
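The regex reported in the error message can be checked locally; it accepts only runs of pipe characters or the literal `User:`, and rejects everything else:

```python
import re

# Validation pattern taken from the service's ValidationException message.
pattern = re.compile(r"^(\|+|User:)$")

# Accepted values:
assert pattern.match("User:")
assert pattern.match("|")
assert pattern.match("|||")

# Rejected values, including the common "Human:" stop sequence:
assert pattern.match("Human:") is None
assert pattern.match("User: ") is None  # even a trailing space fails
```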
Same issue here. This makes CoT prompting methods and LangChain agents impossible to use with Bedrock, because of what is likely a mistyped regular expression and confusing documentation. Please fix the validation to allow stop sequences other than `User:` or `|`, list the actual regular expression used for validation in the documentation, and clarify whether `stopSequences` should be an array of strings or an array containing a single string of `|`-separated words.
Thank you for providing your use cases and more details. I've reached out to the service team about these issues.
As this is a service team issue (not unique to the Python SDK), I've created a tracking issue in our cross-SDK repository (https://github.com/aws/aws-sdk/issues/692). Please continue to discuss your use cases and other details in the new issue.
This issue is now closed. Comments on closed issues are hard for our team to see. If you need more assistance, please open a new issue that references this one.
Describe the bug
The AWS Bedrock inference docs specify the `stopSequences` parameter for Titan models only as an array of strings: https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-text.html
The following code:
Returns an error:
```
ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: string [Human:] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
```
This validation pattern is not mentioned anywhere in the documentation.
Expected Behavior
Should produce an LLM response from the Bedrock Runtime
Current Behavior
Reproduction Steps
Run the following in a python environment:
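The original snippet was not preserved in this report. A minimal sketch that should trigger the same `ValidationException`, assuming the boto3 `bedrock-runtime` client and the `amazon.titan-text-express-v1` model ID (the prompt text and config values are illustrative):

```python
import json


def build_body():
    """Build a Titan request body with a "Human:" stop sequence."""
    return json.dumps({
        "inputText": "Human: Hello\nAssistant:",
        "textGenerationConfig": {
            "maxTokenCount": 256,
            "stopSequences": ["Human:"],  # rejected by ^(\|+|User:)$
        },
    })


def reproduce():
    """Invoke the model; expected to raise a ValidationException."""
    import boto3  # requires AWS credentials to be configured

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # assumed Titan model ID
        body=build_body(),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())
```

Calling `reproduce()` against a real endpoint raises `botocore.exceptions.ClientError` with the `ValidationException` shown above.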
Possible Solution
No response
Additional Information/Context
No response
SDK version used
1.34.21
Environment details (OS name and version, etc.)
Mac Sonoma M1 Chip