NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

[BUG] LLMGenerationActions.generate_value() fails if the value is not a number #549

Open uvnikgupta opened 3 months ago

uvnikgupta commented 3 months ago

The return statement of `generate_value()`, `return literal_eval(value)`, raises an exception if the value is anything other than a number. Modifying it to `return value` resolves the issue. Is this expected behavior? If yes, can someone please tell me how to extract string values from user utterances?
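For context, the behavior can be reproduced with `ast.literal_eval` alone, outside of NeMo Guardrails. Strictly, the parse fails whenever the value is not a valid Python literal, which in practice means any unquoted string the LLM produces (the file name below is just an illustrative value):

```python
from ast import literal_eval

# Valid Python literals parse fine:
literal_eval("42")            # -> 42 (int)
literal_eval('["a", "b"]')    # -> ['a', 'b'] (list)

# A bare, unquoted string is not a valid Python literal,
# so the parse fails:
literal_eval("report.pdf")    # raises ValueError: malformed node or string
literal_eval("unknown")       # raises ValueError
```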

I am using the following Colang flow:

```colang
# Extract the FILE NAME from the user's request. If the FILE NAME is not specified just say "unknown" without giving any explanation
$file_name = ...
```

and my prompt content is:

```yaml
content: |-
      """
      {{ general_instructions }}
      """

      # This is how a conversation between a user and the bot can go:
      {{ sample_conversation }}

      # This is how the bot thinks:
      {{ examples }}

      # This is the current conversation between the user and the bot:
      {{ sample_conversation | first_turns(2) }}
      {{ history | colang }}
      # {{ instructions }}
      ${{ var_name }} =
```
drazvan commented 3 months ago

@uvnikgupta: What LLM are you using? Can you try to also instruct the LLM to wrap the value in double quotes?

uvnikgupta commented 3 months ago

> @uvnikgupta: What LLM are you using? Can you try to also instruct the LLM to wrap the value in double quotes?

I am using llama3 and mixtral. I can try to "request" the LLM to add the double quotes, but I am concerned that it may not always work. I feel the better resolution would be to remove the `literal_eval`. Is there any particular reason to keep it that way?

uvnikgupta commented 3 months ago

Asking the LLM to `# Always enclose your response within ""` works for llama3 and GPT 3.5, but is hit or miss for mixtral.
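A minimal illustration of why the quoting instruction helps when the model complies, and why a prompt-only fix stays fragile when it does not:

```python
from ast import literal_eval

# With quotes, the completion is a valid Python string literal:
literal_eval('"unknown"')   # -> 'unknown'

# Without them, parsing fails again, which is why relying on
# the model to add quotes is hit or miss across models:
literal_eval('unknown')     # raises ValueError
```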

drazvan commented 3 months ago

The reason for keeping it is that it lets the LLM produce any primitive Python expression. It's useful to allow the LLM to produce, for example, a list of strings, or even a tuple. But it might make sense to enable a "fallback to string" if the `literal_eval` fails.
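A minimal sketch of that fallback idea (the helper name is hypothetical, not the actual NeMo Guardrails code): try `literal_eval` first so numbers, lists, and tuples still round-trip, and return the raw string when parsing fails.

```python
from ast import literal_eval


def literal_eval_with_fallback(value: str):
    """Parse `value` as a Python literal; fall back to the raw string.

    Hypothetical sketch of the "fallback to string" idea discussed
    above, not the actual generate_value() implementation.
    """
    try:
        # Numbers, quoted strings, lists, tuples, dicts all parse:
        return literal_eval(value)
    except (ValueError, SyntaxError):
        # Unquoted model output such as 'report.pdf' ends up here
        # and is returned as a plain string.
        return value


literal_eval_with_fallback('["a.txt", "b.txt"]')  # -> ['a.txt', 'b.txt']
literal_eval_with_fallback("report.pdf")          # -> 'report.pdf'
```

This keeps the current behavior for every value that already works today and only changes the failing case, so it would be backward compatible with flows that rely on lists or tuples.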