SavioR0 opened 1 week ago
Hello, @SavioR0! I'm here to help you with any bugs, questions, or contribution inquiries you may have. Let me know how I can assist you!
To customize the default prompt in the attribute extraction node and ensure it respects the input variables and produces the desired output, you can use the prompt editor in the LLM node. Here are the steps and code snippets to help you achieve this:
1. **Open the Prompt Editor**: In the LLM node, navigate to the prompt editor where you can customize the SYSTEM/USER/ASSISTANT sections.
2. **Insert Variables**: Use the variable-insertion menu by typing "/" or "{" to insert special variable blocks or upstream node variables into the prompt.
3. **Use Context Variables**: Define and insert context variables to incorporate externally retrieved text content into the prompt.
4. **Leverage Jinja2 Templates**: Use the Jinja2 template language for more complex prompts, allowing lightweight data transformation and logical processing.
Here is an example of how you might customize a prompt using these features:
```
SYSTEM: You are an intelligent assistant. Your task is to extract specific attributes from the provided text.
USER: {{ user_input }}
ASSISTANT: Based on the input, here are the extracted attributes:
{% for attribute in context_variables.attributes %}
- {{ attribute.name }}: {{ attribute.value }}
{% endfor %}
```
In this example:
- `{{ user_input }}` is an upstream node variable representing the user's input.
- `context_variables.attributes` is a context variable containing the attributes to be extracted.
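For intuition about how the Jinja2 portion renders, here is a minimal standalone sketch using the `jinja2` Python package. The sample values for `context_variables.attributes` are hypothetical stand-ins for whatever your upstream nodes actually provide:

```python
from jinja2 import Template

# The ASSISTANT section of the prompt above, as a plain Jinja2 template.
template = Template(
    "Based on the input, here are the extracted attributes:\n"
    "{% for attribute in context_variables.attributes %}"
    "- {{ attribute.name }}: {{ attribute.value }}\n"
    "{% endfor %}"
)

# Hypothetical context variable; in Dify this would come from your workflow, not be hard-coded.
context_variables = {
    "attributes": [
        {"name": "cidade", "value": "São Paulo"},
        {"name": "data", "value": "2024-05-01"},
    ]
}

# Prints the header line followed by one "- name: value" line per attribute.
print(template.render(context_variables=context_variables))
```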
Additionally, you can use the `AdvancedPromptTransform` class and related methods from `parameter_extractor_node.py` to further customize and validate your prompts. For instance, you can generate function-call prompts or prompt-engineering prompts based on the model mode (e.g., CHAT or COMPLETION). Here is a snippet showing how to generate a function call prompt:
```python
def _generate_function_call_prompt(self, node_data, query, variable_pool, model_config, memory):
    # Wrap the raw query in the extractor's user template, embedding the JSON schema
    # of the parameters to extract.
    query = FUNCTION_CALLING_EXTRACTOR_USER_TEMPLATE.format(
        content=query, structure=json.dumps(node_data.get_parameter_json_schema())
    )

    prompt_transform = AdvancedPromptTransform(with_variable_tmpl=True)
    rest_token = self._calculate_rest_token(node_data, query, variable_pool, model_config, '')
    prompt_template = self._get_function_calling_prompt_template(node_data, query, variable_pool, memory, rest_token)
    prompt_messages = prompt_transform.get_prompt(
        prompt_template=prompt_template,
        inputs={},
        query='',
        files=[],
        context='',
        memory_config=node_data.memory,
        memory=None,
        model_config=model_config
    )

    # `tool` is the function/tool definition built from the node's parameter schema;
    # its construction is elided in this snippet.
    return prompt_messages, [tool]
```
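For context, function-call-based extraction essentially hands the model a tool whose parameter schema mirrors the attributes you want extracted. The following is a minimal standalone sketch using the official `openai` client, not Dify's internal code; the tool name, attribute names, and model choice are assumptions for illustration only:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool whose parameters mirror the attributes configured on the extractor node.
extract_tool = {
    "type": "function",
    "function": {
        "name": "extract_parameters",
        "description": "Extract the requested attributes from the user's text.",
        "parameters": {
            "type": "object",
            "properties": {
                "cidade": {"type": "string", "description": "City mentioned by the user"},
                "data": {"type": "string", "description": "Date mentioned by the user"},
            },
            "required": ["cidade", "data"],
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract only what is present in the input; do not invent values."},
        {"role": "user", "content": "Quero viajar para São Paulo no dia 01/05/2024."},
    ],
    tools=[extract_tool],
    # Force the model to answer via the extraction tool.
    tool_choice={"type": "function", "function": {"name": "extract_parameters"}},
)

# The extracted attributes arrive as the JSON arguments of the tool call.
print(response.choices[0].message.tool_calls[0].function.arguments)
```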
**Configuration Steps**

**Advanced Settings**
**Inference Mode**

Some models support two inference modes: parameter extraction via function/tool calling or via pure prompting. The two modes differ in how well instructions are followed; for instance, a model with weak function-calling support may perform better with prompt-based inference.
**Memory**

When memory is enabled, each input to the parameter extractor includes the chat history of the conversation, helping the LLM understand the context and improving question comprehension in interactive dialogues.
**Output Variables**

| Variable | Type | Description |
| --- | --- | --- |
| `__is_success` | Number | Extraction success status: 1 for success, 0 for failure. |
| `__reason` | String | Reason for extraction failure, if any. |
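For intuition, downstream logic can treat the node's output like the structure below (field names match the docs above; extracted values are hypothetical). In a Dify workflow you would typically branch on `__is_success` with an IF/ELSE node rather than in code:

```python
# Hypothetical output of a parameter extractor run.
node_output = {
    "__is_success": 1,
    "__reason": None,
    "cidade": "São Paulo",
    "data": "2024-05-01",
}

if node_output["__is_success"] == 1:
    print(f"Extracted: {node_output['cidade']}, {node_output['data']}")
else:
    print(f"Extraction failed: {node_output['__reason']}")
```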
By following these steps and using the provided code snippets, you can customize the default prompt to ensure it respects the input variables and produces the desired output [1][2][3].
To continue talking to Dosu, mention @dosu.
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
**Customizing the default prompt**

When using the attribute extraction node, I encountered a challenge. I ran several tests with different prompts and realized that the results did not match what I expected. My need was to query an {{input}} and have GPT extract the necessary variables, respecting the data from that input. Frequently, it ignored this input variable and simply reproduced the user's input. My hypothesis was that there is a hidden "dirty prompt" that cannot be changed through the graphical interface. This could explain the model's confusion: with so many instructions, it may get confused and fail to respect all the restrictions. The default prompt used in every parameter extractor node is shown below:
As a solution, I created an API that uses the official openai library with a more personalized and concise prompt. This way I obtained the desired result: I defined an output format that simulated the output variables of the parameter extractor node.
This way, I was able to create a prompt that met my needs. It confirmed that a non-customizable prompt is not always the best option, since in specific cases it may not adequately meet the user's demands. One suggestion would be to give the user the option to modify this default prompt.
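(The author's actual prompt is not reproduced here. Purely as an illustration of the approach described above, a short system prompt plus a JSON output contract that mimics the node's `__is_success`/`__reason` variables might look like the sketch below; the model, field names, and wording are assumptions, not the author's code.)

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Hypothetical concise system prompt defining a strict JSON output contract.
SYSTEM_PROMPT = (
    "Extraia do texto do usuário os campos 'cidade' e 'data'. "
    "Responda apenas com JSON no formato: "
    '{"__is_success": 0 ou 1, "__reason": string ou null, "cidade": string, "data": string}. '
    "Use apenas informações presentes no texto; não invente valores."
)

def extract(user_input: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ],
        # Constrain the reply to valid JSON so it can be parsed directly.
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(extract("Quero viajar para São Paulo no dia 01/05/2024."))
```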
My prompt looked like this:
Note: The user input will be in Portuguese (Brazil), which is why the examples are in that language.
2. Additional context or comments
No response
3. Can you help us with this feature?