Closed shrusti-ghela closed 9 months ago
Not in the library currently, we are tracking it in #38
@shrusti-ghela A quick workaround would be to replace the deprecated text-davinci-003 model with OpenAI's suggested replacement, gpt-3.5-turbo-instruct (see OpenAI's Deprecations page).
```python
# Look for this code snippet:
def call_ChatGPT(message, model_name="text-davinci-003", max_len=1024, temp=0.7, verbose=False):
    # call GPT-3 API until result is provided and then return it
    response = None
    received = False
    num_rate_errors = 0
    while not received:
        # ...

# And replace it with this workaround:
def call_ChatGPT(message, model_name="gpt-3.5-turbo-instruct", max_len=1024, temp=0.7, verbose=False):
    # ...
```
Assuming you're using a virtual environment, you will find this piece of code in venv/lib/python3.9/site-packages/factscore/openai_lm.py.
Note: normally you shouldn't change the library code in your personal installation, because it will be overwritten when a new version is released, but just to get it working for the moment, this might be fine.
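If you'd rather not edit the installed file at all, one alternative is to rebind the function with a new default using `functools.partial`. This is only a sketch: it assumes the `call_ChatGPT` signature shown above, and it only takes effect for callers that look the function up through the module attribute at call time (e.g. `olm.call_ChatGPT(...)` after `import factscore.openai_lm as olm`). The stub below stands in for the real function so the idea can be demonstrated without an API key:

```python
import functools

# Hypothetical stand-in for factscore.openai_lm.call_ChatGPT; the real
# function lives in venv/lib/python3.9/site-packages/factscore/openai_lm.py
# and calls the OpenAI API. Here we just report which model would be used.
def call_ChatGPT(message, model_name="text-davinci-003", max_len=1024, temp=0.7, verbose=False):
    return f"[{model_name}] {message}"

# Rebind the name with a new default model, leaving the library file untouched.
# Against the real library you would patch the module attribute instead:
#   import factscore.openai_lm as olm
#   olm.call_ChatGPT = functools.partial(olm.call_ChatGPT,
#                                        model_name="gpt-3.5-turbo-instruct")
call_ChatGPT = functools.partial(call_ChatGPT, model_name="gpt-3.5-turbo-instruct")

print(call_ChatGPT("hello"))  # now defaults to gpt-3.5-turbo-instruct
```

Caveat: this override is bypassed if any call site passes `model_name` explicitly (positionally or by keyword), so it is best treated as a stopgap, like the file edit above.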
Is there a way to bypass this or change the model from my end to run factscore using OpenAI models?