joonspk-research / generative_agents

Generative Agents: Interactive Simulacra of Human Behavior
Apache License 2.0

Suggest to use gpt-3.5-turbo instead of text-davinci-002/003 #35

Open yunzheng1112 opened 1 year ago

yunzheng1112 commented 1 year ago

According to OpenAI (https://platform.openai.com/docs/models/gpt-3-5), gpt-3.5-turbo is recommended over the davinci models, and text-davinci is marked as legacy.
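For anyone making the switch: the two model families use different endpoints, so the request shape changes too. A minimal sketch of the payload difference, using illustrative helper names (not code from this repo):

```python
# Payload shapes for the two OpenAI endpoints, as documented at the time
# of this issue. These helpers only build dicts; they make no API call.

def completions_payload(prompt, model="text-davinci-003", max_tokens=50):
    """Legacy Completions endpoint: takes a single prompt string."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def chat_payload(prompt, model="gpt-3.5-turbo", max_tokens=50):
    """Chat Completions endpoint: takes a list of role-tagged messages."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens}
```

So switching models is not just a name change; the prompt has to be wrapped in a messages list.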

Cdingram commented 1 year ago

Don’t have time to look more into it, but reverie/backend_server/persona/prompt_template/gpt_structure.py seems to have code that calls 3.5 and even 4 in it. Haven’t looked into where it may be used because search is difficult until the repo is indexed.

jarrellmark commented 1 year ago

It doesn't work very well, but this will work:

In reverie/backend_server/persona/prompt_template/gpt_structure.py:

Define the new function ChatGPT_safe_generate_response_2 above safe_generate_response, and replace the body of safe_generate_response as shown below.

def ChatGPT_safe_generate_response_2(prompt,
                                     repeat=3,
                                     fail_safe_response="error",
                                     func_validate=None,
                                     func_clean_up=None,
                                     verbose=False):
  if verbose:
    print("CHAT GPT PROMPT")
    print(prompt)

  # Try up to `repeat` times; any attempt that raises or fails validation
  # is discarded, and the fail-safe string is returned as a last resort.
  for i in range(repeat):
    try:
      curr_gpt_response = ChatGPT_request(prompt).strip()
      # Debug output: raw model response.
      print("curr_gpt_response")
      print("-0-0-0-0-0-0-")
      print(curr_gpt_response)
      print("-0-0-0-0-0-0-")
      if func_validate(curr_gpt_response, prompt=prompt):
        return_value = func_clean_up(curr_gpt_response, prompt=prompt)
        # Debug output: cleaned-up value that will be returned.
        print("return_value")
        print("-0-0-0-0-0-0-")
        print(return_value)
        print("-0-0-0-0-0-0-")
        return return_value
    except Exception:
      pass

  return fail_safe_response
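The wrapper above expects func_validate and func_clean_up callbacks that take the raw response plus the prompt. A minimal sketch of that contract, with a hypothetical stub standing in for ChatGPT_request so no API call is made:

```python
# Hypothetical stand-in for ChatGPT_request; returns a canned response.
def fake_chatgpt_request(prompt):
    return "  Answer: 42  "

# Validator: accept only responses in the expected shape.
def func_validate(response, prompt=""):
    return "Answer:" in response

# Clean-up: extract the part of the response the caller actually wants.
def func_clean_up(response, prompt=""):
    return response.split("Answer:")[1].strip()

# The retry loop in the wrapper boils down to this:
def safe_call(prompt, repeat=3, fail_safe="error"):
    for _ in range(repeat):
        try:
            resp = fake_chatgpt_request(prompt).strip()
            if func_validate(resp, prompt=prompt):
                return func_clean_up(resp, prompt=prompt)
        except Exception:
            pass  # bad attempt; try again
    return fail_safe  # every attempt failed
```

Each prompt template in the repo supplies its own validate/clean-up pair, which is why the wrapper takes them as parameters.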

def safe_generate_response(prompt,
                           gpt_parameter,
                           repeat=5,
                           fail_safe_response="error",
                           func_validate=None,
                           func_clean_up=None,
                           verbose=False):
  # Route everything through the gpt-3.5-turbo wrapper above.
  print("[safe_generate_response]: ENTER")
  print("[safe_generate_response]: Calling")
  response = ChatGPT_safe_generate_response_2(prompt, repeat=repeat, fail_safe_response=fail_safe_response, func_validate=func_validate, func_clean_up=func_clean_up)
  return response

  # To get back to normal, comment out the lines above; the original
  # code below (unreachable while the early return is in place) runs again.

  if verbose:
    print(prompt)

  for i in range(repeat):
    curr_gpt_response = GPT_request(prompt, gpt_parameter)
    if func_validate(curr_gpt_response, prompt=prompt):
      return func_clean_up(curr_gpt_response, prompt=prompt)
    if verbose:
      print("---- repeat count: ", i, curr_gpt_response)
      print(curr_gpt_response)
      print("~~~~")
  return fail_safe_response

All calls to the more expensive text-davinci-002/003 models go through safe_generate_response, so this replaces them with calls to the much cheaper gpt-3.5-turbo.
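For completeness, ChatGPT_request itself can be a thin call to the Chat Completions endpoint. A hedged sketch, assuming the pre-1.0 openai Python library (openai.ChatCompletion.create); the create parameter is a hypothetical hook added here so the function can be exercised without a network call:

```python
def ChatGPT_request_sketch(prompt, create=None):
    """Hypothetical drop-in for ChatGPT_request using gpt-3.5-turbo.

    `create` defaults to openai.ChatCompletion.create (openai<1.0);
    passing a fake lets you test the plumbing without an API key.
    """
    if create is None:
        import openai  # requires OPENAI_API_KEY to be configured
        create = openai.ChatCompletion.create
    completion = create(model="gpt-3.5-turbo",
                        messages=[{"role": "user", "content": prompt}])
    # Chat responses nest the text under choices -> message -> content.
    return completion["choices"][0]["message"]["content"]
```

The injectable create hook is just for illustration; in the repo the request function talks to the API directly.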