Closed 646e62 closed 1 year ago
Good point, any idea how you'll remedy this?
The local summarization functions should help with some of this by reducing how much text gets sent to GPT. Removing citations, paragraph numbers, and other extraneous characters (for text summarization, at least) should cut the number of tokens per call, and therefore the price.
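A minimal sketch of what that preprocessing step could look like. The regex patterns here are assumptions about what the citations and paragraph numbers look like (bracketed numerals and line-leading numbered prefixes); real documents may need different patterns:

```python
import re

def preprocess_for_summary(text: str) -> str:
    """Strip extraneous characters before sending text to the API.

    The patterns below are illustrative guesses at what 'citations'
    and 'paragraph numbers' look like in the source documents.
    """
    # Remove bracketed citations like [1] or [12, 13]
    text = re.sub(r"\[\d+(?:,\s*\d+)*\]", "", text)
    # Remove leading paragraph numbers like "12. " at the start of a line
    text = re.sub(r"(?m)^\s*\d+\.\s+", "", text)
    # Collapse the runs of spaces left behind by the removals
    text = re.sub(r"[ \t]{2,}", " ", text).strip()
    return text

print(preprocess_for_summary("12. The court held [1] that the appeal fails."))
# → The court held that the appeal fails.
```

Every character stripped here is a token (or part of one) that never gets billed.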
interesting ok
Looks like OpenAI just did some of the work for me. GPT-3.5 is now live in the API and only costs 10% of what the previous model did:
https://platform.openai.com/docs/guides/chat/instructing-chat-models
Noice!
Because the GPT API isn't free, rule-based logic and custom local models should be used where possible to limit how heavily the program relies on a paid service.