karanravindra opened this issue 11 months ago
You can check the `evaluate_email` function and replace this:
```python
completion = client.chat.completions.create(
    model="gpt-4",  # switch to gpt-3.5-turbo for faster/cheaper results (might be slightly less accurate)
    messages=[system_message, user_message],
    max_tokens=1,
    temperature=0.0,
)
```
with your own call, but `completion` should keep the same type. If you run it locally (considering that giving away such personal data to OpenAI or anywhere else just seems strange), you may also need to think about batching, multiprocessing, etc.
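
For the local case, a minimal sketch of batching might look like the following. It assumes the repo's `evaluate_email(email)` takes a single email and returns the evaluation result; the `evaluate_batch` helper and `max_workers` value are illustrative, not part of the repo.

```python
# Minimal sketch of parallel evaluation for a local setup.
# evaluate_email is the function referenced above; evaluate_batch and
# max_workers are illustrative names, not from the repo.
from concurrent.futures import ThreadPoolExecutor

def evaluate_batch(emails, max_workers=8):
    # A thread pool is enough when evaluate_email mostly waits on an
    # inference server; switch to ProcessPoolExecutor for CPU-bound work.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(evaluate_email, emails))
```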
This would, in theory, work with non-OpenAI models as well, right? It would be cool to use a model like LLaMA for it.
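
In principle, yes: anything exposing an OpenAI-compatible chat completions API can be dropped in, since the surrounding code only needs `completion` to keep the same shape. A rough sketch with a Llama model served locally by Ollama; the `base_url`, model name, and placeholder `api_key` depend on your local setup and are assumptions here:

```python
# Sketch: point the same OpenAI client at a local Llama model served by
# Ollama's OpenAI-compatible endpoint. The api_key value is ignored by
# Ollama but required by the client constructor.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="llama3",  # whichever model you have pulled locally
    messages=[system_message, user_message],
    max_tokens=1,
    temperature=0.0,
)
```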