open-webui / pipelines

Pipelines: Versatile, UI-Agnostic OpenAI-Compatible Plugin Framework

Pipeline: Detect built-in prompts vs user messages #461

Closed · aslanok closed this 4 months ago

aslanok commented 4 months ago

Pipelines has some built-in prompts for generating the chat title, autocompleting prompts, etc. When I ask a question, my pipe function is triggered three times. I want to route my user prompts to an agent via FastAPI and the other built-in prompts to my local LLM endpoint. How can I separate these messages?

I've pasted the payloads below. The first invocation is the message I typed into the chat; invocations 2 and 3 are triggered at the same time.

How can I separate user prompts from built-in prompts?

Invocation 1: {'stream': True, 'model': 'm_agent_pipeline', 'messages': [{'role': 'user', 'content': 'what is the weather in munich'}], 'user': {'name': 'test', 'id': '123123123', 'email': 'test@test.com', 'role': 'admin'}}

Invocation 2: {'stream': False, 'model': 'm_agent_pipeline', 'messages': [{'role': 'user', 'content': '### Task:\nGenerate a concise, 3-5 word title with an emoji summarizing the chat history.\n### Guidelines:\n- The title should clearly represent the main theme or subject of the conversation.\n- Use emojis that enhance understanding of the topic, but avoid quotation marks or special formatting.\n- Write the title in the chat\'s primary language; default to English if multilingual.\n- Prioritize accuracy over excessive creativity; keep it clear and simple.\n### Output:\nJSON format: { "title": "your concise title here" }\n### Examples:\n- { "title": "📉 Stock Market Trends" },\n- { "title": "🍪 Perfect Chocolate Chip Recipe" },\n- { "title": "Evolution of Music Streaming" },\n- { "title": "Remote Work Productivity Tips" },\n- { "title": "Artificial Intelligence in Healthcare" },\n- { "title": "🎮 Video Game Development Insights" }\n### Chat History:\n\nUSER: what is the weather in munich\nASSISTANT: Are you sure for learning weather?\n'}], 'user': {'name': 'test', 'id': '123123123', 'email': 'test@test.com', 'role': 'admin'}, 'max_tokens': 1000}

Invocation 3: {'stream': False, 'model': 'm_agent_pipeline', 'messages': [{'role': 'user', 'content': '### Task:\nGenerate 1-3 broad tags categorizing the main themes of the chat history, along with 1-3 more specific subtopic tags.\n\n### Guidelines:\n- Start with high-level domains (e.g. Science, Technology, Philosophy, Arts, Politics, Business, Health, Sports, Entertainment, Education)\n- Consider including relevant subfields/subdomains if they are strongly represented throughout the conversation\n- If content is too short (less than 3 messages) or too diverse, use only ["General"]\n- Use the chat\'s primary language; default to English if multilingual\n- Prioritize accuracy over specificity\n\n### Output:\nJSON format: { "tags": ["tag1", "tag2", "tag3"] }\n\n### Chat History:\n\nUSER: what is the weather in munich\nASSISTANT: Are you sure for learning weather?\n'}], 'user': {'name': 'test', 'id': '123123123', 'email': 'test@test.com', 'role': 'admin'}}
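
Looking at these payloads, the only consistent differences I can see are that the built-in task prompts arrive with 'stream': False and a user message whose content starts with '### Task:'. So one crude workaround (just a heuristic sketch based on the dumps above; it would break if the built-in prompt templates change) might be:

def is_builtin_prompt(body: dict) -> bool:
    # Heuristic from the payloads above: the built-in title/tag requests
    # come with stream=False and a prompt starting with "### Task:",
    # while my real user messages stream and are free-form.
    if body.get("stream", True):
        return False
    messages = body.get("messages", [])
    if not messages:
        return False
    return str(messages[-1].get("content", "")).startswith("### Task:")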

aslanok commented 4 months ago

Go to Settings -> Interface -> External Models and select an external LLM model for generating the 2nd and 3rd messages. Then the pipeline won't be triggered for them; they will be handled by the external model instead. I found the solution here: #386

PlebeiusGaragicus commented 1 month ago

FYI - I use this inside my pipeline to check if it's being run for the purpose of title generation:

from typing import Optional

# ensure my inlet captures the metadata:
async def inlet(self, body: dict, user: Optional[dict] = None) -> dict:
    self.metadata = body.get("metadata", {})
    return body

# inside my pipe():
if self.metadata.get("task") == "title_generation":
    ...  # handle the built-in title request here
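
Putting the two together, a minimal sketch of a Pipeline that routes the built-in task prompts to a local LLM and real user messages to a FastAPI agent. The two URLs, the model name, the 'tags_generation' task name, and the agent's response shape are assumptions; adapt them to your setup.

from typing import List, Optional

import requests


class Pipeline:
    def __init__(self):
        self.name = "m_agent_pipeline"
        self.metadata = {}

    async def inlet(self, body: dict, user: Optional[dict] = None) -> dict:
        # capture the request metadata so pipe() can inspect the task type
        self.metadata = body.get("metadata", {})
        return body

    def pipe(self, user_message: str, model_id: str, messages: List[dict], body: dict) -> str:
        task = self.metadata.get("task")
        if task in ("title_generation", "tags_generation"):
            # built-in prompt -> local OpenAI-compatible endpoint (placeholder URL and model)
            r = requests.post(
                "http://localhost:11434/v1/chat/completions",
                json={"model": "llama3", "messages": messages, "stream": False},
            )
            return r.json()["choices"][0]["message"]["content"]
        # real user message -> my FastAPI agent (placeholder URL and schema)
        r = requests.post("http://localhost:8000/agent", json={"query": user_message})
        return r.json().get("answer", "")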