NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Exception: Prompt exceeds max length of 16000 characters even without history #754

Open ayoubazaouyat opened 2 months ago

ayoubazaouyat commented 2 months ago

Python call:

```python
from nemoguardrails import LLMRails

rails = LLMRails(config)

messages = [{"role": "user", "content": "what is an mbr ?"}]
options = {"output_vars": True}

output = rails.generate(messages=messages, options=options)
print(output)
```
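
For completeness, `config` in the snippet above is loaded from the config folder; per the traceback later in this thread it comes from `RailsConfig.from_path`:

```python
from nemoguardrails import RailsConfig

# Load the guardrails configuration from the ./config directory
# (the same path that appears in the traceback further down).
config = RailsConfig.from_path("./config")
```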

config.yml:

```yaml
models:
  # ...

rails:
  output:
    flows:
      # ...

prompts:
  # ...
```


Error:

```
Error while execution 'self_check_facts' with parameters: {***}
Prompt exceeds max length of 16000 characters even without history
Traceback (most recent call last):
  File "c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\actions\action_dispatcher.py", line 197, in execute_action
    result = await result
             ^^^^^^^^^^^^
  File "C:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\library\self_check\facts\actions.py", line 45, in self_check_facts
    prompt = llm_task_manager.render_task_prompt(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\llm\taskmanager.py", line 231, in render_task_prompt
    raise Exception(
Exception: Prompt exceeds max length of 16000 characters even without history
```

ayoubazaouyat commented 2 months ago

Even after setting max_tokens to a lower limit, it still reports the 12000 token limit. I even changed the default in the rails config library, but it still shows the same error.
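
For context, the limit in that message is the prompt template's max_length in characters, not the model's max_tokens: the exception is raised by the prompt-length check in nemoguardrails/llm/taskmanager.py. A paraphrased sketch of that check (not the exact library source):

```python
# Paraphrased sketch of the check behind the exception, not the exact
# source of nemoguardrails/llm/taskmanager.py: the rendered prompt's
# character count is compared against the prompt template's max_length,
# which is why lowering the model's max_tokens has no effect here.
def check_prompt_length(rendered_prompt: str, max_length: int = 16000) -> None:
    if len(rendered_prompt) > max_length:
        raise Exception(
            f"Prompt exceeds max length of {max_length} characters even without history"
        )
```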

Pouyanpi commented 2 months ago

Hi @ayoubazaouyat, would you try it out using fact_checking instead of self_check_facts? So just change the task name. I will investigate further if that does not resolve it. Thanks!

ayoubazaouyat commented 2 months ago

Hi @Pouyanpi , I changed the following :


config.yml:

```yaml
models:
  # ...

rails:
  output:
    flows:
      # ...
```

prompt.yml:

```yaml
prompts:
  # ...
```


My output.co looks as follows:

```colang
define user ask questions
  "Can I connect to scales ?"
  "Can I automate decisions?"

define flow answer pas-x question
  user ...
  $answer = execute rag()
  $check_facts = True
  bot $answer
```
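
Since the flow calls `execute rag()`, a custom action with that name has to be registered with the rails app. A minimal sketch of what that can look like (the retrieval logic is a placeholder, and the decorator usage assumes the custom-action API of recent versions):

```python
# Minimal sketch of a custom rag() action; the retrieval/answering
# logic is a placeholder. An existing function can alternatively be
# registered with rails.register_action(rag, name="rag").
from nemoguardrails.actions import action

@action(name="rag")
async def rag() -> str:
    # ... retrieve relevant context and generate an answer here ...
    return "placeholder answer"
```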


Error:

```
ValueError                                Traceback (most recent call last)
Cell In[11], line 1
----> 1 rails = LLMRails(config)
      3 messages=[{
      4     "role": "user", "content": "what is an mbr ?"
      5 }]
      7 options = {"output_vars": True}

File c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\rails\llm\llmrails.py:211, in LLMRails.__init__(self, config, llm, verbose)
    208     break
    210 # We run some additional checks on the config
--> 211 self._validate_config()
    213 # Next, we initialize the LLM engines (main engine and action engines if specified).
    214 self._init_llms()

File c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\rails\llm\llmrails.py:272, in LLMRails._validate_config(self)
    270 for flow_name in self.config.rails.output.flows:
    271     if flow_name not in existing_flows_names:
--> 272         raise ValueError(
    273             f"The provided output rail flow {flow_name} does not exist"
    274         )
    276 for flow_name in self.config.rails.retrieval.flows:
    277     if flow_name not in existing_flows_names:

ValueError: The provided output rail flow fact checking does not exist
```


What I want is to use the default fact-checker. Or do I have to use something else? How can I fix this? Thanks.

Pouyanpi commented 2 months ago

@ayoubazaouyat, I meant just to change the prompt's task:

```yaml
- task: fact_checking
  max_length: 120000
  content: |-
    You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
    You will only use the contents of the evidence and not rely on external knowledge.
    Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":
```

Keep the rest as it was:

```yaml
rails:
  output:
    flows:
      - self check facts
```

Would you please check it with this config?
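
For reference, re-running the original snippet against this config would look like:

```python
from nemoguardrails import RailsConfig, LLMRails

# Reload the config folder and repeat the original test call.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

output = rails.generate(
    messages=[{"role": "user", "content": "what is an mbr ?"}],
    options={"output_vars": True},
)
print(output)
```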

ayoubazaouyat commented 2 months ago

@Pouyanpi, I did try that; it gives this error back:

{ "name": "ValidationError", "message": "1 validation error for RailsConfig Value error, You must provide a self_check_facts prompt template. [type=value_error, input_value={'user_messages': {'ask':...onfig_path': './config'}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/value_error", "stack": "--------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[11], line 3 1 from nemoguardrails import RailsConfig ----> 3 config = RailsConfig.from_path(\"./config\")

File ~/Downloads/Python/.venv/lib/python3.10/site-packages/nemoguardrails/rails/llm/config.py:856, in RailsConfig.from_path(config_path) 852 raw_config[\"instructions\"] = _default_config[\"instructions\"] 854 raw_config[\"config_path\"] = config_path --> 856 return RailsConfig.parse_object(raw_config)

File ~/Downloads/Python/.venv/lib/python3.10/site-packages/nemoguardrails/rails/llm/config.py:926, in RailsConfig.parse_object(cls, obj) 921 if flow_data.get(\"elements\") and not flow_data[\"elements\"][0].get( 922 \"_type\" 923 ): 924 flow_data[\"elements\"] = parse_flow_elements(flow_data[\"elements\"]) --> 926 return RailsConfig.parse_obj(obj)

File ~/Downloads/Python/.venv/lib/python3.10/site-packages/pydantic/main.py:1118, in BaseModel.parse_obj(cls, obj) 1112 @classmethod 1113 @typing_extensions.deprecated('The parse_obj method is deprecated; use model_validate instead.', category=None) 1114 def parse_obj(cls: type[Model], obj: Any) -> Model: # noqa: D102 1115 warnings.warn( 1116 'The parse_obj method is deprecated; use model_validate instead.', category=PydanticDeprecatedSince20 1117 ) -> 1118 return cls.model_validate(obj)

File ~/Downloads/Python/.venv/lib/python3.10/site-packages/pydantic/main.py:551, in BaseModel.model_validate(cls, obj, strict, from_attributes, context) 549 # __tracebackhide__ tells pytest and some other tools to omit this function from tracebacks 550 tracebackhide = True --> 551 return cls.__pydantic_validator__.validate_python( 552 obj, strict=strict, from_attributes=from_attributes, context=context 553 )

ValidationError: 1 validation error for RailsConfig Value error, You must provide a self_check_facts prompt template. [type=value_error, input_value={'user_messages': {'ask':...onfig_path': './config'}, input_type=dict] For further information visit https://errors.pydantic.dev/2.7/v/value_error" }

Pouyanpi commented 2 months ago

Thank you very much @ayoubazaouyat. I'll investigate it and update you shortly.

Pouyanpi commented 2 months ago

@ayoubazaouyat there seems to be a bug, thanks for reporting it.

As a workaround, until we fix it, you can use the following in your prompts.yml:

```yaml
prompts:
  - task: self_check_facts
    max_length: 120000
    content: |-
      You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
      You will only use the contents of the evidence and not rely on external knowledge.
      Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":

  - task: fact_checking
    max_length: 120000
    content: |-
      You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
      You will only use the contents of the evidence and not rely on external knowledge.
      Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":
```

It worked on my end; I used the head of the develop branch. I hope it temporarily resolves your issue.
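
To confirm the fact-checking rail actually fires, the generation log options should show the activated rails; a sketch, assuming the `log` generation option available in recent versions (field names may differ across releases):

```python
# Sketch for verifying the rail ran; assumes the `log` generation
# option with `activated_rails`, available in recent versions.
output = rails.generate(
    messages=[{"role": "user", "content": "what is an mbr ?"}],
    options={"output_vars": True, "log": {"activated_rails": True}},
)
print(output.response)
for activated_rail in output.log.activated_rails:
    print(activated_rail.type, activated_rail.name)
```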