AadarshBhalerao opened this issue 1 month ago
Hey @AadarshBhalerao, I think the docs you are following are not up to date. You need to modify the call like this, replacing "bot" with "assistant":
new_message = await app.generate_async(
    messages=[
        {"role": "user", "content": "Explain to me life insurance"},
        {"role": "assistant", "content": "Idiot, I cant tell that"},
    ],
    options={"rails": ["output"]},  # run only the output rails
)
print(f"Reply: {new_message}")

# Inspect what happened during generation
info = app.explain()
info.print_llm_calls_summary()
print("History: ", info.colang_history)
@drazvan I think these docs need updating: as per this line in the code, the role specifically needs to be "assistant".
Thanks, @kaushikabhishek87! We'll fix the example.
Thanks @kaushikabhishek87, this worked perfectly. I have another question: when we don't specify the kind of rails to run in options, all of the LLM calls are made:
- self_check_input
- generate_user_intent
- generate_bot_intent
- generate_bot_response
- self_check_output
So, after self_check_input, when generate_user_intent is executed, is there a way to stop, i.e., no further execution of generate_bot_intent and generate_bot_response? (Please note: in this example I won't be adding an output rail.) I am planning to use this to check user prompts; if a prompt is okay with the policies I have defined, the same prompt can be forwarded to another project-specific GPT to get the response.
Let me know in case anything isn't clear. Thanks! :)
@AadarshBhalerao: so, are you trying to run just the input rails and the intent generation, and then do something based on the intent? One quick way I can think of is to add a flow like:
define bot ok
  "ok"

define flow ok
  user ...
  bot ok
If the message passes the input rail, then the intent is generated, and the message "ok" is returned. To get the exact intent, you'll have to access the detailed log through the generation options. Let me know if this helps.
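For instance, something like this (a sketch; the "rails" and "log" option fields are taken from the generation options guide and may differ slightly across versions):

new_message = await app.generate_async(
    messages=[{"role": "user", "content": "Explain to me life insurance"}],
    options={
        "rails": ["input", "dialog"],  # skip the output rails entirely
        "log": {"activated_rails": True, "colang_history": True},
    },
)

# When generation options are passed, the result is a GenerationResponse;
# the generated intent can be read from the detailed log.
print("Reply:", new_message.response)
print("History:", new_message.log.colang_history)

The intent generated for the last user message should show up in that colang history, alongside the "ok" response from the flow above.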
Hi, I am trying to set up the output guardrail only, as per the instructions in this link: https://docs.nvidia.com/nemo/guardrails/user_guides/advanced/generation-options.html#output-rails-only
My function is:
Config.yml:
flow.co:
and prompts.yml:
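For reference, the output-rails-only call pattern from that page looks roughly like this (a sketch; it assumes the three files above sit in a ./config directory):

from nemoguardrails import RailsConfig, LLMRails

# Load the config folder containing config.yml, flow.co, and prompts.yml
config = RailsConfig.from_path("./config")
app = LLMRails(config)

# With options={"rails": ["output"]}, only the output rails run;
# the last "assistant" message is the one being checked.
result = await app.generate_async(
    messages=[
        {"role": "user", "content": "..."},
        {"role": "assistant", "content": "..."},
    ],
    options={"rails": ["output"]},
)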
The error message received is: