aiqinxuancai / ChatGPTSharp

This project implements continuous ChatGPT dialogue tracked by ConversationId, and can be integrated with just a few lines of code. It supports models such as gpt-4, gpt-3.5-turbo, and text-davinci-003.
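For context, the continuous-dialogue flow looks roughly like this (the constructor arguments below are an assumption, not confirmed by this thread; the SendMessage(text, conversationId, messageId) shape matches the maintainer's example later in the thread):

using System;
using ChatGPTSharp;

// Assumed constructor shape; see the repository README for the exact signature.
var client = new ChatGPTClient("sk-your-key", "gpt-3.5-turbo");

var msg1 = await client.SendMessage("Hello");
Console.WriteLine(msg1.Response);

// Passing ConversationId and MessageId back continues the same dialogue.
var msg2 = await client.SendMessage("What did I just say?", msg1.ConversationId, msg1.MessageId);
Console.WriteLine(msg2.Response);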

How to use chat api instead of completions? #6

Closed: lofti198 closed this issue 1 month ago

lofti198 commented 1 month ago

Hi! I want to send several messages one by one and then ask the model to make one general text from them (this is my approach to getting around the per-request token limit with big texts). The final dialogue should look like this (I tried it in the ChatGPT web version and also at https://platform.openai.com/playground/chat?models=gpt-4o and got the same result):

User
i will send you several messages and then I will ask you to create one general text from them
ChatGPT
Sure, go ahead and send the messages. I'll be ready to create a general text from them once you've sent everything.

User
Once upon a time, lived a frog
ChatGPT
Got it. Please continue with the next messages.

User
She had 5 children
ChatGPT
Okay, noted. Please send the next message when ready.

User
She was very happy
ChatGPT
Understood. Please send the next message.

User
Now join them all in one text
ChatGPT
Once upon a time, there lived a frog. She had 5 children and was very happy.

However, the code below (using your client) seems to call the completions API instead of the chat API, despite the fact that I pass the conversation id. Is there any way to use the chat API instead?

using ChatGPTSharp;
using ChatGPTSharp.Model;

namespace BigTextLLMProcessor.Services.AI
{
    public class BigTextLLMChunkProcessor
    {
        private readonly string _initialPrompt;
        private readonly string _separateRequestPrompt;
        private readonly string _finalPrompt;
        private readonly ChatGPTClient _chatGptClient;

        public BigTextLLMChunkProcessor(string initialPrompt, string separateRequestPrompt, string finalPrompt,
            ChatGPTClient chatGptClient)
        {
            _initialPrompt = initialPrompt;
            _separateRequestPrompt = separateRequestPrompt;
            _finalPrompt = finalPrompt;
            _chatGptClient = chatGptClient;
        }
        public async Task<string> Process(List<string> chunks)
        {
            // Open the conversation with the initial prompt.
            ConversationResult convResult = await _chatGptClient.SendMessage("", "", _initialPrompt);
            string systemMessage;
            for (int i = 0; i < chunks.Count; i++)
            {
                // Fill the %N% placeholder with the 1-based chunk number.
                systemMessage = _separateRequestPrompt.Replace("%N%", (i + 1).ToString());

                // Send the chunk within the same conversation, with the per-chunk system message.
                convResult = await _chatGptClient.SendMessage(chunks[i], convResult.ConversationId, systemMessage);
            }

            // Finally, ask the model to combine everything sent in this conversation.
            convResult = await _chatGptClient.SendMessage(_finalPrompt, convResult.ConversationId);

            return convResult.Response;
        }
    }
}
lofti198 commented 1 month ago

I mean it gives the same result as in this playground: https://platform.openai.com/playground/complete. But I need this one: https://platform.openai.com/playground/chat?models=gpt-4o

P.S. Sorry if I missed something.

aiqinxuancai commented 1 month ago

I don't understand what you need. The method for continuous dialogue is demonstrated fairly clearly in the example:

var msg1 = await client.SendMessage("i will send you several messages and then I will ask you to create one general text from them");

// Wait for the reply, then check the result (msg1.Response)
var msg2 = await client.SendMessage("Once upon a time, lived a frog", msg1.ConversationId, msg1.MessageId);

// Wait for the reply, then check the result (msg2.Response)
var msg3 = await client.SendMessage("She had 5 children", msg2.ConversationId, msg2.MessageId);

...
lofti198 commented 1 month ago

I don't know how it works under the hood, but it doesn't reproduce the chat API behavior (which you can try through the official playground). See the output for the first message: [screenshot]

You can try running this code:

var msg1 = await _chatGptClient.SendMessage("i will send you several messages and then I will ask you to create one general text from them");
Console.WriteLine(msg1.Response);

var msg2 = await _chatGptClient.SendMessage("Once upon a time, lived a frog", msg1.ConversationId, msg1.MessageId);
Console.WriteLine(msg2.Response);

var msg3 = await _chatGptClient.SendMessage("She had 5 children", msg2.ConversationId, msg2.MessageId);
Console.WriteLine(msg3.Response);

var msg4 = await _chatGptClient.SendMessage("She was very happy", msg3.ConversationId, msg3.MessageId);
Console.WriteLine(msg4.Response);

var msg5 = await _chatGptClient.SendMessage("Now join them all in one text", msg4.ConversationId, msg4.MessageId);
Console.WriteLine(msg5.Response);

return msg5.Response;

Here is its output: [screenshot]

It seems a little different from the conversation you get in the chat playground (https://platform.openai.com/playground/chat?models=gpt-4o): [screenshot]

but very similar to what you get if you try the completions playground: https://platform.openai.com/playground/complete

lofti198 commented 1 month ago

!!!UPDATE!!! This behavior happened when I used gpt-3.5-turbo; however, when I switched to gpt-4o it seems to work OK:

var client = ChatGPTClientFactory.CreateClient(
    apiKey: Environment.GetEnvironmentVariable("GPT_API_KEY"),
    modelName: "gpt-4o", // was "gpt-3.5-turbo"
    isDebug: false // debugging disabled
);

aiqinxuancai commented 1 month ago

ChatGPT gives a different response for each conversation; you need to understand the basic principles of large models. If you need specific, program-identifiable data, define the format in the prompt without extra explanation and have the model return the data in a fixed JSON format.
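For illustration, a minimal sketch of that advice, assuming the SendMessage shape shown earlier in this thread; the prompt wording and the System.Text.Json parsing are illustrative assumptions, not part of ChatGPTSharp:

using System.Text.Json;
using System.Threading.Tasks;
using ChatGPTSharp;

static async Task<string> SummarizeAsJson(ChatGPTClient client, string text)
{
    // Pin the output format in the prompt so the reply is machine-readable.
    var prompt = "Summarize the following text. Reply ONLY with JSON of the form {\"summary\":\"...\"}.\n\n" + text;
    var result = await client.SendMessage(prompt);

    // Parse the fixed-format reply; this throws if the model strayed from the schema.
    using var doc = JsonDocument.Parse(result.Response);
    return doc.RootElement.GetProperty("summary").GetString()!;
}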

lofti198 commented 1 month ago

Thank you for the recommendations. I tried something like this, but with big texts (when the total chunk token count barely exceeds 4k tokens) it still ends up summarizing just the last piece (tried with gpt-4-turbo). Here is the prompt I used; a driver sketch follows it.

I will send you text chunks as JSON messages:

{ "chunk": "..." }

After each message with the next chunk, just save it to your memory and reply with the JSON:

{ "reply": "next" }

After all chunks, I will send you the following JSON message:

{ "command": "process" }

After the "process" command, give me an overall summary of all the chunks I have sent before. Return it as JSON:

{ "reply": "SUMMARY" }
lofti198 commented 1 month ago

Thank you for the help! I found a solution.