Problem
The free Codex models will be discontinued by OpenAI on 2023-03-23. After that, every user inquiry will generate ops costs.
Proposed Solution
Add tracking of OpenAI usage via the following columns on the openai_response table in the db:
prompt_tokens: smallint
completion_tokens: smallint
model: text
is_success: bool
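As a sketch, the schema change could look like the migration below. SQLite is used here only as an in-memory stand-in; the real DDL depends on the production database, and only the table and column names come from this ticket.

```python
import sqlite3

# In-memory SQLite stand-in for the real database; the production
# migration would target the actual openai_response table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE openai_response (id INTEGER PRIMARY KEY, raw TEXT)")

# Add the usage-tracking columns proposed above.
for ddl in (
    "ALTER TABLE openai_response ADD COLUMN prompt_tokens SMALLINT",
    "ALTER TABLE openai_response ADD COLUMN completion_tokens SMALLINT",
    "ALTER TABLE openai_response ADD COLUMN model TEXT",
    "ALTER TABLE openai_response ADD COLUMN is_success BOOLEAN",
):
    conn.execute(ddl)

columns = [row[1] for row in conn.execute("PRAGMA table_info(openai_response)")]
print(columns)
```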
DoD
The table openai_response is adjusted with the new columns
The core logic stores the model response regardless of whether graph generation succeeded, so that unsuccessful user requests (no diagram generated), which we still pay OpenAI for, are tracked
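A minimal sketch of that storage logic, assuming the columns above exist; the function and parameter names are illustrative, not taken from the codebase:

```python
import sqlite3


def record_openai_response(conn, response: dict, is_success: bool) -> None:
    """Persist token usage and model info for every response,
    including ones where no diagram could be generated."""
    usage = response.get("usage", {})
    conn.execute(
        "INSERT INTO openai_response "
        "(prompt_tokens, completion_tokens, model, is_success) "
        "VALUES (?, ?, ?, ?)",
        (
            usage.get("prompt_tokens"),
            usage.get("completion_tokens"),
            response.get("model"),
            is_success,
        ),
    )


# Demo against an in-memory table with the proposed columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE openai_response ("
    "prompt_tokens SMALLINT, completion_tokens SMALLINT, "
    "model TEXT, is_success BOOLEAN)"
)
# Record a failed generation: we still store the usage we paid for.
record_openai_response(
    conn,
    {"model": "m", "usage": {"prompt_tokens": 5, "completion_tokens": 2}},
    False,
)
row = conn.execute(
    "SELECT prompt_tokens, completion_tokens, model, is_success "
    "FROM openai_response"
).fetchone()
print(row)
```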
Example:
Prompt: "asdasd"
Response:
{
  "id": "chatcmpl-71vlYWvLGIlVNrhVWe63qFmxofsPT",
  "object": "chat.completion",
  "created": 1680694736,
  "model": "xxxxx",
  "usage": {
    "prompt_tokens": 775,
    "completion_tokens": 28,
    "total_tokens": 803
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "I'm sorry, I don't understand what you mean by \"asdasd\". Can you please provide more context or a specific request?"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
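The fields to persist can be read straight off that payload; a quick sketch of the extraction (the JSON is trimmed to the fields this ticket tracks):

```python
import json

# The example response above, trimmed to the fields we track.
raw = ('{"model":"xxxxx",'
       '"usage":{"prompt_tokens":775,"completion_tokens":28,"total_tokens":803}}')

response = json.loads(raw)
usage = response["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], response["model"])
```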