MrZilinXiao / openai-manager

Speed up your OpenAI requests by balancing prompts to multiple API keys.

How to use `ChatCompletion`? #2

Open hcffffff opened 1 year ago

hcffffff commented 1 year ago

Are there any examples of using the openai_manager.ChatCompletion.create() function? There seems to be a problem when calling it.

In detail, ChatCompletion uses messages as the prompt, which conflicts with the code here.

So could you please show me an example of openai_manager.ChatCompletion usage? Thanks a lot.

MrZilinXiao commented 1 year ago

Hi, @hcffffff! I didn't expect someone to find this repository, and unit tests for this project are still in progress. Regarding your question, I will check the code and get back to you in a few hours. Sorry for the inconvenience.

MrZilinXiao commented 1 year ago

Hi @hcffffff! I have a hotfix for ChatCompletion here: https://github.com/MrZilinXiao/openai-manager/commit/58db2ee4c5daa24d40fd6a9cb3310a298554f5eb.

This is a minimal demo:

import openai_manager

responses = openai_manager.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # one inner list per conversation
        [{"role": "user", "content": "Hello!"}],
        [
            {"role": "user", "content": "Hello!"},
            {"role": "assistant", "content": "Hello there!"},
            {"role": "user", "content": "Who are you?"},
        ],
    ],
)
print(responses)

A reasonable output would be:

[{'id': 'chatcmpl-7035RQGB86hJSuqc0yICkegAMXIi0', 'object': 'chat.completion', 'created': 1680246221, 'model': 'gpt-3.5-turbo-0301', 'usage': {'prompt_tokens': 10, 'completion_tokens': 9, 'total_tokens': 19}, 'choices': [{'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}, 'finish_reason': 'stop', 'index': 0}]}, {'id': 'chatcmpl-7035Wyy6OSU3nzsDPhM5cWXl2mtgD', 'object': 'chat.completion', 'created': 1680246226, 'model': 'gpt-3.5-turbo-0301', 'usage': {'prompt_tokens': 27, 'completion_tokens': 31, 'total_tokens': 58}, 'choices': [{'message': {'role': 'assistant', 'content': 'I am an AI language model designed by OpenAI. You can ask me questions or ask for assistance with various tasks. How may I assist you today?'}, 'finish_reason': 'stop', 'index': 0}]}]

Note that, like Completion, messages should be a list of conversations (each itself a list of role/content messages) so that requests can be parallelized across your API keys.
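If you only need the reply text, here is a minimal sketch for unpacking the returned list, assuming each per-conversation response mirrors the standard OpenAI response format shown above:

responses = openai_manager.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        [{"role": "user", "content": "Hello!"}],
        [{"role": "user", "content": "Who are you?"}],
    ],
)

# one response dict per conversation, in the same order as `messages`
for i, resp in enumerate(responses):
    reply = resp["choices"][0]["message"]["content"]
    print(f"conversation {i}: {reply}")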

Hope this helps!

MrZilinXiao commented 1 year ago

Also, please pull from this repo to get the fix, as I plan to refactor the codebase this weekend. No update will be published to PyPI until the refactoring is finished.
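For example, installing straight from GitHub should pick up the hotfix (this is just the usual pip-from-git pattern, adjust as needed):

pip install --upgrade git+https://github.com/MrZilinXiao/openai-manager.git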

hcffffff commented 1 year ago

Thanks again, that really helps a lot!

Another question: how do I set arguments like logit_bias (or the other parameters in the OpenAI API Reference) when a single ChatCompletion.create() call contains multiple conversations?

MrZilinXiao commented 1 year ago

Wow, that's a problem I have thought about before. I assume you want to attach different model_kwargs to different requests, e.g. temperature=1 for the first request and temperature=0 for the others.

I would like to add it to my TODO list with a dedicated interface, openai_manager.ChatCompletion.create_with_different_kwargs(). I will let you know when it's done.
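Just to illustrate what I have in mind, a rough sketch of how such an interface might be called once it exists (the parameter name per_request_kwargs is only a placeholder; nothing here is implemented yet):

# hypothetical sketch -- create_with_different_kwargs() is only planned, not implemented
responses = openai_manager.ChatCompletion.create_with_different_kwargs(
    model="gpt-3.5-turbo",
    messages=[
        [{"role": "user", "content": "Write a haiku."}],
        [{"role": "user", "content": "Answer strictly yes or no: is 7 prime?"}],
    ],
    # one kwargs dict per conversation; could also carry logit_bias, max_tokens, etc.
    per_request_kwargs=[
        {"temperature": 1.0},
        {"temperature": 0.0},
    ],
)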

traosorus commented 1 year ago

Hello, you can check my Betsy repository; it might help you.

hcffffff commented 1 year ago

Thanks, and really appreciate your work!
