Eladlev / AutoPrompt

A framework for prompt tuning using Intent-based Prompt Calibration
Apache License 2.0
2.22k stars · 193 forks

[Question] Help in defining my use case #65

Closed snassimr closed 4 months ago

snassimr commented 5 months ago

Hi @Eladlev,

First of all, thank you for your work. I am eager to evaluate your method for my use case.

I want to use an open-source model for a task and tune a prompt for this model. I would like to generate some texts and annotate them as "Good" or "Bad". I also want to use GPT-4 to learn what makes a text "Good" or "Bad" in order to tune the prompt above.

Which of your examples (https://github.com/Eladlev/AutoPrompt/blob/main/docs/examples.md) matches my case? And which models should the `llm` and the predictor `llm` be for my case?

Eladlev commented 5 months ago

Hi, so if I understand correctly, you want to optimize a prompt for an open-source model according to GPT-4 annotations. This can easily be done by following these instructions: https://github.com/Eladlev/AutoPrompt/blob/main/docs/installation.md#configure-llm-annotator

And by using HuggingFacePipeline as the predictor (with an open-source model, e.g. Llama 3). This can be done by modifying the predictor LLM according to https://github.com/Eladlev/AutoPrompt/issues/40#issuecomment-2016365671
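For orientation, a rough sketch of the two relevant config pieces follows. This is an assumption based on the shape of `config/config_default.yml`: the exact key names, and whether `'HuggingFacePipeline'` is accepted as a `type` out of the box or needs the code change from issue #40, may differ in your version.

```yaml
# Sketch only -- key names assumed from config/config_default.yml, not a verified schema.
annotator:
  method: 'llm'   # GPT-4 acting as the annotator, per the installation doc linked above

predictor:
  method: 'llm'
  config:
    llm:
      type: 'HuggingFacePipeline'                  # assumption: per the workaround in issue #40
      name: 'meta-llama/Meta-Llama-3-8B-Instruct'  # hypothetical open-source model choice
```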

snassimr commented 5 months ago

Hi @Eladlev,

Thanks for your tips. After some rethinking, I need an Argilla (human) annotator and a HuggingFacePipeline predictor. I am still working on it, but I have a question: what is the standalone "llm" section in the config, and what is the role of this LLM?

[screenshot of the config file showing the standalone `llm` section]

Eladlev commented 5 months ago

This is the optimizer LLM (probably we should have put it under meta_prompts). This LLM is used to generate the synthetic data, the new prompt suggestions, and the error analysis. I suggest using a strong LLM in this part of the configuration (even if you are using a HuggingFacePipeline LLM as the predictor).
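Concretely, this means the config can pair a strong optimizer `llm` with a lighter open-source predictor, along these lines (a sketch under the same key-name assumptions as above):

```yaml
llm:                          # optimizer: synthetic data, new prompt suggestions, error analysis
  type: 'OpenAI'
  name: 'gpt-4-1106-preview'  # assumption: any strong model works in this role

annotator:
  method: 'argilla'           # human annotation via Argilla, as chosen above

predictor:
  method: 'llm'
  config:
    llm:
      type: 'HuggingFacePipeline'   # the open-source model whose prompt is being tuned
```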

snassimr commented 4 months ago

Hi @Eladlev, my prompt is a bit complex. It consists of two parts: one that should be optimized, and a second part that is input to this prompt and shouldn't be tuned. Here is the example:

```python
prompt = """
Summarize text. Keep key events.  # this part only is subject to tuning
Text: {text_str}
Summary:
"""
```

Let's assume we are talking about a given value of `text_str`. I'd like to present several summaries to the user via Argilla, and he/she will annotate each as "Good" or "Bad". Is this a generation case rather than a classification case? And should `text_str` appear in the `task_description`?

Thanks

Eladlev commented 4 months ago

Hi, your use case is very similar to this example: https://github.com/Eladlev/AutoPrompt/blob/main/docs/examples.md#generating-movie-reviews-generation-task

There, too, the prompt being modified is an instruction prompt, while part of the user prompt is given and stays fixed (the movie description). The same split applies to your prompt, as sketched below.
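To make the split concrete, here is an illustration (hedged: `initial_prompt` as a key name is hypothetical, and the exact interface for passing the initial prompt and task description may differ). Only the instruction sentence is the part AutoPrompt optimizes; the fixed scaffold travels with each sample's input, just as the movie description does in the linked example:

```yaml
# Hypothetical illustration of the split, not the repo's exact interface:
initial_prompt: 'Summarize text. Keep key events.'   # the only part AutoPrompt tunes
# The fixed, untuned scaffold belongs with each sample's input rather than
# with the optimized prompt:
#   Text: {text_str}
#   Summary:
```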

If you have any questions regarding the adjustment for your specific use case, we can also iterate on it in the AutoPrompt Discord channel.