austinmw opened this issue 11 months ago
@austinmw: what type of troubles are you talking about? Is it getting the configuration to use the right prompts, or is it that the LLM does not respond correctly? If you can paste here the `anthropic.yml` file you've tried and a log from `nemoguardrails chat --verbose --config=...`, I can have a quick look.
Hi @drazvan, mainly it was getting the LLM to respond correctly. I found that using XML tags for sections, and in some cases pre-filling the beginning of the Assistant completion, helped improve the results for Claude models.
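For concreteness, here is a minimal sketch of the prompt shape I mean, using plain `boto3` against Bedrock's (legacy) Claude text-completion API; the region, model ID, and prompt text are just placeholders, not the real config:

```python
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# XML-tagged section plus a pre-filled start of the Assistant turn.
prompt = (
    "\n\nHuman: Summarize the text inside the tags.\n"
    "<text>\n"
    "NeMo Guardrails adds programmable guardrails to LLM applications.\n"
    "</text>\n"
    "\n\nAssistant: Summary:"  # pre-filling steers where Claude picks up
)

response = client.invoke_model(
    modelId="anthropic.claude-instant-v1",
    body=json.dumps({
        "prompt": prompt,
        "max_tokens_to_sample": 256,
        "stop_sequences": ["\n\nHuman:"],
    }),
)
print(json.loads(response["body"].read())["completion"])
```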
This is very much a work in progress, but it seems to perform a lot better than `general.yml` for Anthropic models in my testing:
````yaml
prompts:
  # GENERAL PROMPTS
  - task: general
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instructions.strip() }}
      {{ history | user_assistant_sequence }}
      Assistant:
    output_parser: custom_general_parser

  # Prompt for detecting the user message canonical form.
  - task: generate_user_intent
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instruction.strip() }}
      This is how a conversation between a user and the bot can go:
      <sample_conversation>
      {{ sample_conversation.strip() }}
      </sample_conversation>
      This is how the user talks:
      <examples>
      {{ examples.strip() }}
      </examples>
      This is the current conversation between the user and the bot:
      <current_conversation>
      {{ sample_conversation | first_turns(2) }}
      {{ history | colang }}
      </current_conversation>
      Complete the user intent and write nothing else.
      Assistant: User intent:
    output_parser: custom_intent_parser

  # Prompt for generating the next steps.
  - task: generate_next_steps
    models:
      - amazon_bedrock/anthropic
    content: |-
      """
      {{ general_instruction.strip() }}
      """
      # This is how a conversation between a user and the bot can go:
      {{ sample_conversation.strip() | remove_text_messages }}
      # This is how the bot thinks:
      {{ examples.strip() | remove_text_messages }}
      # This is the current conversation between the user and the bot:
      {{ sample_conversation.strip() | first_turns(2) | remove_text_messages }}
      {{ history | colang | remove_text_messages }}

  # Prompt for generating the bot message from a canonical form.
  - task: generate_bot_message
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instruction.strip() }}
      Current date: {{ current_date }}
      This is how a conversation between a user and the bot can go:
      <sample_conversation>
      {{ sample_conversation.strip() }}
      </sample_conversation>
      {% if relevant_chunks %}
      This is some additional context:
      ```markdown
      {{ relevant_chunks }}
      ```
      {% endif %}
      This is how the bot talks:
      <examples>
      {{ examples.strip() }}
      </examples>
      This is the current conversation between the user and the bot:
      <current_conversation>
      {{ sample_conversation.strip() | first_turns(2) }}
      {{ history | colang }}
      </current_conversation>
      Use the sample conversation, examples, and current conversation to write a reply for the bot.
      Make sure to pay close attention to the canonical form for what the bot should say (if applicable)!
      Only write the reply for the bot, and nothing else. Do not write the canonical form.
      Assistant:
    output_parser: custom_general_parser
  - task: generate_value
    models:
      - amazon_bedrock/anthropic
    content: |-
      {{ general_instruction.strip() }}
      Assistant: ${{ var_name }} =

  - task: fact_checking
    models:
      - amazon_bedrock/anthropic
    content: |-
      Assistant:
    output_parser: custom_fact_check_parser

  - task: jailbreak_check
    models:
      - amazon_bedrock/anthropic
    content: |-
      Analyze the following user input:
      Assistant: The answer is:

  - task: output_moderation
    models:
      - amazon_bedrock/anthropic
    content: |-
      Assistant: The answer is:

  - task: output_moderation_v2
    models:
      - amazon_bedrock/anthropic
    content: |-
      Here is an example:
      {{ user_input }}
      Does this response fall under one of the following
      Assistant:

  - task: check_hallucination
    models:
      - amazon_bedrock/anthropic
    content: |-
      Assistant: The answer is:
````
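The `output_parser` names above (`custom_general_parser`, `custom_intent_parser`, `custom_fact_check_parser`) refer to custom parsers registered from a `config.py` next to the config files. A rough sketch of what that can look like, assuming the `register_output_parser` hook on `LLMRails`; the parsing logic here is only a placeholder for whatever cleanup your Claude completions need (the filters like `first_turns` and `colang` are built in, so they don't need registering):

```python
from nemoguardrails import LLMRails

def custom_general_parser(output: str) -> str:
    # Placeholder: strip whitespace and any echoed "Bot message:" label.
    return output.strip().removeprefix("Bot message:").strip()

def custom_intent_parser(output: str) -> str:
    # Placeholder: keep only the first line of the completion.
    return output.strip().split("\n")[0].strip()

def custom_fact_check_parser(output: str) -> str:
    # Placeholder: pass the completion through unchanged.
    return output.strip()

def init(app: LLMRails):
    # Register the parsers referenced by the prompts' output_parser fields.
    app.register_output_parser(custom_general_parser, "custom_general_parser")
    app.register_output_parser(custom_intent_parser, "custom_intent_parser")
    app.register_output_parser(custom_fact_check_parser, "custom_fact_check_parser")
```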
And I'm defining models in my YAML file like this:
```yaml
models:
  - type: main
    engine: amazon_bedrock
    model: anthropic
    parameters:
      model_id: anthropic.claude-instant-v1
      #model_id: anthropic.claude-v2
      model_kwargs:
        max_tokens_to_sample: 4048
        stop_sequences: ["\n\nHuman:", "\nUser:"]

# Set embedding_search_provider
core:
  embedding_search_provider:
    name: amazon_bedrock
    parameters:
      embedding_engine: amazon_bedrock
      embedding_model: amazon.titan-embed-text-v1

# And for the knowledge base.
knowledge_base:
  embedding_search_provider:
    name: amazon_bedrock
    parameters:
      embedding_engine: amazon_bedrock
      embedding_model: amazon.titan-embed-text-v1
```
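With both files in a config directory, loading and smoke-testing the setup looks roughly like this (the `./config` path is just an example):

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the YAML config (prompts + models) from the config directory.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```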
Hey, did you find a solution? :)
I'm having a bit of trouble taking the `general.yml` and creating a successful `anthropic.yml`. Since Claude models are being used very frequently, would anyone be willing to add examples for these?