The text completion api is useful because it's not censored like the chat completion apis. Also, sometimes pure text completion is simply useful.
My uncommitted implementation generates a chat simulation from the db.listHistory() messages I was previously feeding into openai.chatCompletions.
For example, I turn this:
{ role: 'user', content: 'Give me a random color.' }
{ role: 'assistant', content: 'Yellow.' }
{ role: 'user', content: 'Now tell me a random animal.' }
Into something like this:
User: Give me a random color.
Assistant: Yellow.
User: Now tell me a random animal.
Assistant:
It leaves a hanging "Assistant: " so that the textCompletion api completes the text with whatever the assistant should say next.
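A minimal sketch of that conversion (the function name and the "User"/"Assistant" role labels are my assumptions, matching the example above):

```javascript
// Flatten chat-style messages into a text-completion prompt.
// Role labels here are assumed; adjust to whatever the model was primed with.
function buildPrompt(messages) {
  const labels = { user: 'User', assistant: 'Assistant' };
  const lines = messages.map((m) => `${labels[m.role]}: ${m.content}`);
  lines.push('Assistant:'); // hanging turn for the model to complete
  return lines.join('\n');
}
```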
[x] /model should allow davinci selection
[x] I guess GPT4_ENABLED doesn't apply to davinci.
[ ] The davinci path should also consider token count (especially since the generated prompt string is longer than the chatCompletion token count) and also set max_tokens, since I don't think the API does it for us.
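One way the max_tokens item could be handled, as a sketch only: estimate prompt tokens with a crude ~4 chars/token heuristic (a real implementation would use an actual tokenizer like tiktoken), then give the completion whatever remains of davinci's 4097-token context window. The function name and reserve logic are hypothetical.

```javascript
// davinci's context window covers prompt + completion combined.
const CONTEXT_LIMIT = 4097;

// Crude estimate: ~4 characters per token for English text.
// Swap in a real tokenizer (e.g. tiktoken) for accurate counts.
function completionBudget(prompt) {
  const promptTokens = Math.ceil(prompt.length / 4);
  return Math.max(0, CONTEXT_LIMIT - promptTokens);
}
```

The result would be passed as max_tokens on the text completion request, so an oversized prompt fails loudly (budget 0) instead of silently truncating the reply.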