SylphAI-Inc / AdalFlow

AdalFlow: The library to build & auto-optimize LLM applications.
http://adalflow.sylph.ai/
MIT License

Support passing a list of messages to LLM model #131

Open zjffdu opened 4 months ago

zjffdu commented 4 months ago

Is your feature request related to a problem? Please describe. Currently I can only pass one string to Generator, but OpenAI supports a list of messages, which is what I want.

Describe the solution you'd like I'd like support for a list of messages like the following:

import openai

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the World Series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Who was the MVP?"},
]

# Call the ChatCompletion.create method with the messages
response = openai.ChatCompletion.create(
    model="gpt-4",  # Use "gpt-4" or "gpt-3.5-turbo" depending on your model availability
    messages=messages
)
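For reference, newer versions of the openai Python package (>=1.0) replace openai.ChatCompletion.create with a client object; the equivalent call looks like this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
)
print(response.choices[0].message.content)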


liyin2015 commented 4 months ago

@zjffdu Thanks for writing the issue. Why does it have to be a list of messages? Why not form them into a single prompt and send that? This tutorial should explain how you can form it.

If it makes things easier, we can consider providing a simple component to help you form it:

<SYS> You are a helpful assistant.</SYS>
User: Who won the World Series in 2020?
You: The Los Angeles Dodgers won the World Series in 2020.
User: Who was the MVP? 
You: 

This is how research papers form their prompts.
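A minimal sketch of such a formatting helper, assuming OpenAI-style message dicts (the function name and tags are illustrative, not an existing AdalFlow API):

def messages_to_prompt(messages: list[dict]) -> str:
    """Flatten OpenAI-style chat messages into a single prompt string."""
    lines = []
    for m in messages:
        if m["role"] == "system":
            lines.append(f"<SYS> {m['content']} </SYS>")
        elif m["role"] == "user":
            lines.append(f"User: {m['content']}")
        else:  # assistant
            lines.append(f"You: {m['content']}")
    lines.append("You: ")  # leave the final turn open for the model to complete
    return "\n".join(lines)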

liyin2015 commented 4 months ago

Additionally, we could create a GeneratorMessages component that takes a list of chat turns as input instead of prompt_kwargs and a prompt template. The extension on ModelClient would be simple enough. If the community wants to write a proposal on this approach, it could be a great way to extend the library without complicating the prompt-only approach.
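A rough sketch of what that interface could look like (every name here is hypothetical; none of this exists in AdalFlow yet):

# Hypothetical interface only -- GeneratorMessages is a proposal, not an AdalFlow class.
from dataclasses import dataclass

@dataclass
class ChatTurn:
    role: str     # "system", "user", or "assistant"
    content: str

class GeneratorMessages:
    """Takes a list of chat turns instead of prompt_kwargs + a prompt template."""

    def __init__(self, model_client, model_kwargs=None):
        self.model_client = model_client
        self.model_kwargs = model_kwargs or {}

    def call(self, messages: list[ChatTurn]):
        # Pass the turns through in the model client's native chat format.
        api_messages = [{"role": t.role, "content": t.content} for t in messages]
        return self.model_client.call(messages=api_messages, **self.model_kwargs)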

mikeedjones commented 2 months ago

This does mean you don't use the trained user/assistant tokens (<|im_start|>user\n, et cetera). We've seen reduced performance on a fair number of tasks (especially dialogue tasks) when formatting the input into a single message.
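For illustration, Hugging Face tokenizers can render a model's trained chat template, which is where those special tokens come from (the model name below is just an example of a ChatML-style model):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
text = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": "Who won the World Series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers."},
        {"role": "user", "content": "Who was the MVP?"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)  # shows the <|im_start|>user ... <|im_end|> markers the model was trained on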