NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Custom `Action` block #172

Open shashi792 opened 1 year ago

shashi792 commented 1 year ago

I'm trying to work with a custom Action for a simple PII use case. All it does is obfuscate emails and phone numbers. Currently, when the intent is met (i.e. pii), the action block is triggered, but from there reading the context's `'last_bot_message'` is not working.

azure_llm = AzureChatOpenAI(
    deployment_name="test-gpt4-32k",
    model_name = "gpt4-32k",
)

@action()
async def masking_email_phone(
    context: Optional[dict] = None,
    llm: Optional[BaseLLM] = azure_llm,
):
    print(f"The context  = {context}")
    # user_message = azure_llm(messages= context.get('last_user_message'))
    print(f"The user message =  {user_message}")
    # Call your Model
    translated_message = utils.mask_pii(user_message.get('last_bot_message'))
    return translated_message

config = RailsConfig.from_path("/nemo_config/config_files")
app = LLMRails(config=config, llm=azure_llm, verbose=True)
app.register_action(masking_email_phone, name="masking_email_phone")
history = [{"role": "user", "content": "Contact details of IRS Finance department ?"}]
print(f"Checking = {history}")
result = app.generate(messages=history)
print(f"The results =  {result}")
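The `utils.mask_pii` helper referenced above is not shown in the issue; a minimal regex-based sketch of what such a helper might look like (the patterns and placeholder tokens are assumptions, not the actual implementation):

```python
import re

# Hypothetical stand-in for utils.mask_pii: regex-based masking of
# email addresses and phone numbers in a piece of text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)  # run email masking first
    text = PHONE_RE.sub("[PHONE]", text)  # then phone numbers
    return text
```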

`.co` file:

define user request for pii
  "Provide me with the contact details such as phone number and email id ?"
  "customer care contact number"
  "provide email id"
  "What are the contact details of XYZ headquarters ?"
  "Contact details of XYZ department ?"

define flow
  user request for pii
  $obfuscatemsg = execute masking_email_phone
  bot $obfuscatemsg

baravit commented 1 year ago

Hi @shashi792, you mixed up the syntax a little. Try the following:

user_message = context.get('last_user_message')
print(f"The user message =  {user_message}")

I don't know the interface of `azure_llm`, but `user_message` is a string containing only the content of the last user message.

shashi792 commented 1 year ago

@baravit

# print(f"The context = {context.get('last_user_message')}")
message_guardrail = [{"role": "user", "content": context.get('last_user_message')}]
print(message_guardrail)
user_message = await app.generate_async(messages=message_guardrail)
print(f"The user message = {user_message}")
# Call your Model
translated_message = utils.mask_pii(user_message.get('last_bot_message'))
print(f"Ans from Action Block = {translated_message}")
return translated_message

The `last_user_message` needs to be passed through an LLM. But the `await app.generate_async` call goes into an infinite wait.

baravit commented 1 year ago

You are using it wrong. Where do your messages come from?

Try the following process, assuming you have set up a NeMo server:

  1. Input - `user_message`, for which you need to generate a bot response.
  2. Wrap it in the following format:
    messages = [
        {'role': 'user', 'content': user_message}
    ]
  3. Send it to NeMo with:
    bot_message = await app.generate_async(messages=messages)

As per the docs (https://github.com/NVIDIA/NeMo-Guardrails/blob/main/docs/user_guide/python-api.md), the `context` variable contains data that is available to actions:

{ 'last_user_message': ...,  'last_bot_message': ..., 'retrieved_relevant_chunks': ... }

Therefore, it is meant to be used from inside an action, not to be fed to NeMo, which doesn't have the context for the action before actually sending the message. For example, take a look at the following snippet:

import logging
from dotenv import load_dotenv
load_dotenv()
from nemoguardrails import LLMRails, RailsConfig
logging.basicConfig(level=logging.INFO)

async def custom_action(context, events):
    last_user_message = context.get('last_user_message')
    print(f"The last user message is {last_user_message}") #should print "Let's test custom action"
    return "This is custom action"

COLANG_CONFIG = """
define user express greeting
    "Hello"
    "Hi"

define user test custom action
    "Let's test custom action"

define bot greeting and suggest help
    "Hi there, how can I help you?"

define flow greeting
    user express greeting
    bot greeting and suggest help

define flow test custom action
    user test custom action
    $answer = execute custom_action
    bot $answer

"""

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(COLANG_CONFIG, YAML_CONFIG)
app = LLMRails(config)
app.register_action(custom_action, "custom_action")

def demo():

    history = [
        {"role": "user", "content": "Let's test custom action"},
    ]

    bot_message = app.generate(messages=history)

    print(f"\033[92m{bot_message['content']}\033[0m")

if __name__ == "__main__":
    demo()
shashi792 commented 1 year ago

@baravit, thanks for the quick reply. Sorry for the confusion. Continuing on your example:

history = [
    {"role": "user", "content": "Provide generic email ids for sales and finance teams with domain mydomain.com"},
]
bot_message = app.generate(messages=history)
  1. Once this intent is triggered, it goes to the defined action `custom_action`
  2. In here, we would like to call the LLM to execute the question
  3. E.g.: LLM output
    Sure, here are some generic email ids for sales and finance teams with domain mydomain.com:
    Sales Team:
    sales@mydomain.com: This is a standard email address for general sales inquiries.
    salesinfo@mydomain.com: This address is similar to sales@mydomain.com and can be used for more specific sales-related inquiries.
  4. Pass the `bot_message` through a function `mask_email`, which will mask the email ids in `bot_message`
    Sure, here are some generic email ids for sales and finance teams with domain mydomain.com:
    Sales Team:
    ***@mydomain.com: This is a standard email address for general sales inquiries.
    sa***nfo@mydomain.com: This address is similar to sales@mydomain.com and can be used for more specific sales-related inquiries.
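Step 4 could be sketched roughly as follows (a hypothetical `mask_email` reproducing the sample masking above, not an actual helper from the toolkit):

```python
import re

EMAIL_RE = re.compile(r"([\w.+-]+)@([\w.-]+)")

def _mask_local(local: str) -> str:
    # Keep the first two and last three characters of longer local parts;
    # fully mask short ones, mirroring the sample output above.
    if len(local) <= 5:
        return "***"
    return local[:2] + "***" + local[-3:]

def mask_email(text: str) -> str:
    # Mask the local part of every email address, keep the domain.
    return EMAIL_RE.sub(lambda m: _mask_local(m.group(1)) + "@" + m.group(2), text)
```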

This is similar to the moderation rail example: https://github.com/NVIDIA/NeMo-Guardrails/tree/main/examples/moderation_rail (see: How output-moderation works).

drazvan commented 11 months ago

Hi @shashi792 ! And thanks @baravit for jumping in to help.

We now have better support for output rails, which can alter the message. Check out: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/library/sensitive_data_detection/flows.co#L31 and https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/docs/user_guides/guardrails-library.md#sensitive-data-detection

These should answer your question. Let me know if you need additional details.
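Per the linked guardrails-library docs, enabling the built-in sensitive-data-detection output rail looks roughly like the following `config.yml` fragment (entity names follow Presidio conventions; check the linked docs for the exact schema):

```yaml
rails:
  config:
    sensitive_data_detection:
      output:
        entities:
          - EMAIL_ADDRESS
          - PHONE_NUMBER
  output:
    flows:
      - mask sensitive data on output
```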