austinmw opened this issue 9 months ago
Hi Austin,
I am sure Amazon Bedrock is of interest to many others.
Actually, as Bedrock already has a wrapper in LangChain, it is simple to configure and use it from NeMo Guardrails by specifying the LLM engine and the required parameters for Bedrock in config.yml, e.g.:
models:
  - type: main
    engine: amazon_bedrock
    parameters:
      credentials_profile_name: "bedrock-admin"
      model_id: "amazon.titan-tg1-large"
This works for me, but you should probably run some experiments to assess which prompts are better suited for Bedrock. You can start with the general.yml prompts (used mainly for text-davinci-003) and then take a look at the prompts for the other providers here: https://github.com/NVIDIA/NeMo-Guardrails/tree/main/nemoguardrails/llm/prompts and in the examples for open models: https://github.com/NVIDIA/NeMo-Guardrails/tree/main/examples/llm
Maybe you can help with this and contribute a MR? Once we have general support for Bedrock in Guardrails, we can guide you to integrate it in the hallucination rail, or even perform some of the evaluation experiments to assess its performance: https://github.com/NVIDIA/NeMo-Guardrails/blob/main/nemoguardrails/eval/README.md
Thanks, Traian
This has worked decently well for me, in anthropic.yml:
prompts:
  # GENERAL PROMPTS
  - task: general
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instructions.strip() }}
      {{ history | user_assistant_sequence }}
      Assistant:
    output_parser: custom_general_parser

  # Prompt for detecting the user message canonical form.
  - task: generate_user_intent
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instruction.strip() }}
      This is how a conversation between a user and the bot can go:
      <sample_conversation>
      {{ sample_conversation.strip() }}
      </sample_conversation>
      This is how the user talks:
      <examples>
      {{ examples.strip() }}
      </examples>
      This is the current conversation between the user and the bot:
      <current_conversation>
      {{ sample_conversation | first_turns(2) }}
      {{ history | colang }}
      </current_conversation>
      Complete the user intent and write nothing else.
      Assistant: User intent:
    output_parser: custom_intent_parser

  # Prompt for generating the next steps.
  - task: generate_next_steps
    models:
      - amazon_bedrock/anthropic
    content: |-
      """
      {{ general_instruction.strip() }}
      """
      # This is how a conversation between a user and the bot can go:
      {{ sample_conversation.strip() | remove_text_messages }}
      # This is how the bot thinks:
      {{ examples.strip() | remove_text_messages }}
      # This is the current conversation between the user and the bot:
      {{ sample_conversation.strip() | first_turns(2) | remove_text_messages }}
      {{ history | colang | remove_text_messages }}

  # Prompt for generating the bot message from a canonical form.
  - task: generate_bot_message
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instruction.strip() }}
      Current date: {{ current_date }}
      This is how a conversation between a user and the bot can go:
      <sample_conversation>
      {{ sample_conversation.strip() }}
      </sample_conversation>
      {% if relevant_chunks %}
      This is some additional context:
      ```markdown
      {{ relevant_chunks }}
      ```
      {% endif %}
      This is how the bot talks:
      <examples>
      {{ examples.strip() }}
      </examples>
      This is the current conversation between the user and the bot:
      <current_conversation>
      {{ sample_conversation.strip() | first_turns(2) }}
      {{ history | colang }}
      </current_conversation>
      Use the sample conversation, examples, and current conversation to write a reply for the bot.
      Make sure to pay close attention to the canonical form for what the bot should say (if applicable)!
      Only write the reply for the bot, and nothing else. Do not write the canonical form.
      Assistant:
    output_parser: custom_general_parser
  - task: generate_value
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instruction.strip() }}
      Assistant: ${{ var_name }} =

  - task: fact_checking
    models:
      - amazon_bedrock/anthropic
    content: |-
      Assistant:
    output_parser: custom_fact_check_parser

  - task: jailbreak_check
    models:
      - amazon_bedrock/anthropic
    content: |-
      Analyze the following user input:
      Assistant: The answer is:

  - task: output_moderation
    models:
      - amazon_bedrock/anthropic
    content: |-
      Assistant: The answer is:

  - task: output_moderation_v2
    models:
      - amazon_bedrock/anthropic
    content: |-
      Here is an example:
      {{ user_input }}
      Does this response fall under one of the following
      Answer with yes/no for each point.
      Assistant:\n

  - task: check_hallucination
    models:
      - amazon_bedrock/anthropic
    content: |-
      Assistant: The answer is:
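For context, the custom_* output parsers referenced above are user-defined functions registered from config.py (the exact registration API may vary by NeMo Guardrails version). As a purely hypothetical sketch of what one of them might do, a custom_intent_parser could strip the "User intent:" label that Claude tends to echo back:

```python
def custom_intent_parser(output: str) -> str:
    # Hypothetical parser: Claude often echoes the "User intent:" label,
    # so strip it (and surrounding whitespace) before matching canonical forms.
    output = output.strip()
    prefix = "User intent:"
    if output.startswith(prefix):
        output = output[len(prefix):].strip()
    # Keep only the first line in case the model keeps generating.
    return output.splitlines()[0] if output else output

print(custom_intent_parser("User intent: ask about pricing\nBot intent: ..."))
# ask about pricing
```

The parser names and logic here are illustrative only; adapt them to whatever your Claude completions actually look like.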
Hi @austinmw, can you help me with the general_instruction you used? I am getting the below error when trying to use Bedrock Anthropic models:

ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Invalid prompt: prompt must start with "Human:"
I'll let @austinmw confirm, but this doesn't seem related to the actual value of the general_instructions. Maybe a serialization issue? I.e., it may be expecting a string in JSON format (hence the prompt must start with \" part)?
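For reference, Anthropic's text-completion format (which Bedrock validates here) expects prompts shaped like "\n\nHuman: ...\n\nAssistant:". A minimal stdlib check of that shape (my own sketch, not library code) looks like:

```python
def is_valid_claude_prompt(prompt: str) -> bool:
    # Anthropic's text-completion format: the prompt must start with
    # "\n\nHuman:" and end with an "\n\nAssistant:" turn for the model to complete.
    return prompt.startswith("\n\nHuman:") and prompt.rstrip().endswith("Assistant:")

print(is_valid_claude_prompt("\n\nHuman: Hello there!\n\nAssistant:"))  # True
print(is_valid_claude_prompt("User: Hello there!\nAssistant:"))         # False
```

A prompt built from the default templates fails this check, which matches the ValidationException above.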
Hi @drazvan / @austinmw, I tried altering the prompt so that it starts with "Human:" (this is part of Anthropic's requirements), as below:
prompts:
  - task: general
    models:
      - amazon_bedrock/anthropic
    content: |-
      Human:
      {{ history | user_assistant_sequence }}
      Assistant:
The formatted prompt using the above is:

"prompt": "Human: \nUser: Hello there!\nAssistant:"

How do I prevent this "User:" tag from appearing in between? How do I customize this part?
@pushpendradahiya: you can implement a new human_assistant_sequence filter that uses "Human" rather than "User". You can put the following in your config.py:
from typing import List

from nemoguardrails import LLMRails


def human_assistant_sequence(events: List[dict]) -> str:
    # Render the conversation history with "Human:"/"Assistant:" tags,
    # as the Anthropic models on Bedrock expect.
    history_items = []
    for event in events:
        if event["type"] == "UserMessage":
            history_items.append("Human: " + event["text"])
        elif event["type"] == "StartUtteranceBotAction":
            history_items.append("Assistant: " + event["script"])
    return "\n".join(history_items)


def init(rails: LLMRails):
    # Make the filter available in prompt templates.
    rails.register_filter(human_assistant_sequence)
And replace the prompt template with:
prompts:
  - task: general
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ history | human_assistant_sequence }}
      Assistant:
Note that I removed the first hard-coded "Human". Let me know if this works.
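For anyone following along, here is a standalone check of what that filter produces (the sample events are invented for illustration):

```python
from typing import List


def human_assistant_sequence(events: List[dict]) -> str:
    # Same logic as the filter above: render history with Human/Assistant tags.
    history_items = []
    for event in events:
        if event["type"] == "UserMessage":
            history_items.append("Human: " + event["text"])
        elif event["type"] == "StartUtteranceBotAction":
            history_items.append("Assistant: " + event["script"])
    return "\n".join(history_items)


events = [
    {"type": "UserMessage", "text": "Hello there!"},
    {"type": "StartUtteranceBotAction", "script": "Hi! How can I help?"},
    {"type": "UserMessage", "text": "What can you do?"},
]
print(human_assistant_sequence(events))
# Human: Hello there!
# Assistant: Hi! How can I help?
# Human: What can you do?
```

With this, the rendered prompt no longer contains the "User:" tag that Bedrock's Anthropic validation was rejecting.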
Hi, I'd like to submit a feature request to add support for Amazon Bedrock.
My particular interest is in using Bedrock as the LLM engine for the Hallucination Rail, but I'm sure many others would appreciate wider support.
Thanks!