Closed lofti198 closed 1 month ago
I mean it gives the same result as in this playground: https://platform.openai.com/playground/complete . But I need this one: https://platform.openai.com/playground/chat?models=gpt-4o
P.S. Sorry if I missed something.
I don't understand what you need. The method of continuing a dialogue is demonstrated fairly clearly in the example:
var msg1 = await client.SendMessage("i will send you several messages and then I will ask you to create one general text from them");
// Wait and check the result (msg1.Response)
var msg2 = await client.SendMessage("Once upon a time, lived a frog", msg1.ConversationId, msg1.MessageId);
// Wait and check the result (msg2.Response)
var msg3 = await client.SendMessage("She had 5 children", msg2.ConversationId, msg2.MessageId);
...
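Under the hood, a wrapper like this typically keeps a per-conversation history and replays it to the Chat Completions endpoint on every call. Here is a hedged Python sketch of that idea (the `ConversationStore` class and its method names are hypothetical, not part of this library; the fake assistant reply stands in for a real API call):

```python
import uuid

class ConversationStore:
    """Minimal sketch: map a conversation id to the message history
    that would be replayed to the Chat Completions API on each call."""

    def __init__(self):
        self._histories = {}  # conversation_id -> list of message dicts

    def send_message(self, text, conversation_id=None):
        # Start a new conversation if no id is supplied.
        if conversation_id is None:
            conversation_id = str(uuid.uuid4())
            self._histories[conversation_id] = []
        history = self._histories[conversation_id]
        history.append({"role": "user", "content": text})
        # A real client would POST {"model": ..., "messages": history}
        # to /v1/chat/completions here; we fake the assistant reply.
        reply = f"(assistant reply to: {text!r})"
        history.append({"role": "assistant", "content": reply})
        return conversation_id, reply

store = ConversationStore()
cid, _ = store.send_message("Once upon a time, lived a frog")
store.send_message("She had 5 children", cid)
# The second request carries the full history, so the model sees both
# user turns plus the first assistant reply.
print(len(store._histories[cid]))  # → 4
```

If the wrapper instead concatenated the turns into one flat prompt string, the behavior would look like the completions playground rather than the chat playground.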
I don't know how it works under the hood, but it doesn't reproduce the Chat API behavior (which you can try through the official playground). See the output for the first message:
You can try running this code:
var msg1 = await _chatGptClient.SendMessage("i will send you several messages and then I will ask you to create one general text from them");
Console.WriteLine(msg1.Response);
var msg2 = await _chatGptClient.SendMessage("Once upon a time, lived a frog", msg1.ConversationId, msg1.MessageId);
Console.WriteLine(msg2.Response);
var msg3 = await _chatGptClient.SendMessage("She had 5 children", msg2.ConversationId, msg2.MessageId);
Console.WriteLine(msg3.Response);
var msg4 = await _chatGptClient.SendMessage("She had 5 children", msg3.ConversationId, msg3.MessageId);
Console.WriteLine(msg4.Response);
var msg5 = await _chatGptClient.SendMessage("Now join them all in one text", msg4.ConversationId, msg4.MessageId);
Console.WriteLine(msg5.Response);
return msg5.Response;
Here is its output:
It seems a little different from the conversation you get in the playground (https://platform.openai.com/playground/chat?models=gpt-4o ):
but very similar to what you get if you try the completions playground: https://platform.openai.com/playground/complete
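For reference, the two endpoints take differently shaped request bodies: the legacy completions endpoint accepts a single flat prompt string, while the chat endpoint accepts a list of role-tagged messages, so each turn (including the assistant's own replies) is visible to the model separately. A sketch of the two payloads in Python (field names follow the public OpenAI HTTP API; the message contents here are illustrative and no request is actually sent):

```python
# Legacy Completions endpoint (/v1/completions): one flat prompt string.
completions_payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "i will send you several messages...\nOnce upon a time, lived a frog\n",
}

# Chat Completions endpoint (/v1/chat/completions): role-tagged message list.
chat_payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "i will send you several messages and then "
                                    "I will ask you to create one general text from them"},
        {"role": "assistant", "content": "Understood."},  # hypothetical reply
        {"role": "user", "content": "Once upon a time, lived a frog"},
    ],
}

print(type(completions_payload["prompt"]).__name__)  # → str
print(len(chat_payload["messages"]))                 # → 3
```

A client that flattens the conversation into the first shape will behave like the completions playground even if it tracks conversation IDs.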
!!!UPDATE!!! This behavior happened when I used gpt-3.5-turbo; however, when I switched to gpt-4o, it seems to work OK:
var client = ChatGPTClientFactory.CreateClient(
    apiKey: Environment.GetEnvironmentVariable("GPT_API_KEY"),
    modelName: "gpt-4o", // "gpt-3.5-turbo"
    isDebug: false // Debugging disabled
);
For each conversation, ChatGPT's responses are non-deterministic. You need to understand the basic principles of large language models. If you need specific, program-identifiable data, define it in the prompt without explanation and have the model return the data in a fixed JSON format.
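For example, the prompt can pin down the output contract so the program only ever parses that fixed shape. A small Python sketch (the prompt wording and the canned reply are illustrative, not from this thread; a real reply would come from the Chat API):

```python
import json

# Hypothetical prompt that fixes the response format for machine parsing.
prompt = (
    "Summarize the text below. "
    'Reply ONLY with JSON in the form {"summary": "..."} and no explanation.\n\n'
    "Once upon a time, lived a frog. She had 5 children."
)

# Canned model reply standing in for a real Chat Completions call.
raw_reply = '{"summary": "A frog had five children."}'

# Parsing fails loudly if the model strays from the contract,
# which is easier to handle than free-form prose.
data = json.loads(raw_reply)
print(data["summary"])  # → A frog had five children.
```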
Thank you for the recommendations. I tried something like this, but with big texts (when the overall chunk token length barely exceeds 4k tokens) it still summarizes just the last piece in the end (tried with gpt-4-turbo).
I will send you text chunks in json messages:
{ "chunk":"..." }
After message with next chunk, just save it to your memory and reply with json:
{ "reply":"next" }
After all chunks I will send you the following json message:
{ "command":"process" }
After the "process" method - give me the overall summary of all the chunks, I have sent before. Return it as json:
{ "reply":"SUMMARY" }
Thank you for the help! I found a solution.
Hi! I want to send several messages one by one and finally ask the model to make one general text from them (this is my approach to working around the request token limitation for big models). So the final dialog should look like this (I tried it in the GPT web version and also here https://platform.openai.com/playground/chat?models=gpt-4o and got the same result):
However, the code below (using your client) seems to call the completions API (instead of the chat API), despite the fact that I pass the conversation ID. Is there any way to use the chat API instead?