SaturnCassini / gpt4all_generative_agents

Generative Agents: Interactive Simulacra of Human Behavior, using the free GPT4All model, which runs on CPU
https://saturnseries.com
Apache License 2.0

Errors occur when running the command "run 90" #3

Open · aronfan opened this issue 1 year ago

aronfan commented 1 year ago

GNS FUNCTION: asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6
GPT4All PROMPT
"""
Task: We want to understand the state of an object that is being used by someone.

Let's think step by step. We want to know about bed's state.
Step 1. Isabella Rodriguez is at/using the sleeping.
Step 2. Describe the bed's state: bed is
"""
Output the response to the prompt above in json. The output should ONLY contain the phrase that should go in <fill in>. Example output json: {"output": "being fixed"}

Traceback (most recent call last):
  File "D:\AIwork\SaturnCassini\reverie\backend_server\reverie.py", line 468, in open_server
    rs.start_server(int_count)
  File "D:\AIwork\SaturnCassini\reverie\backend_server\reverie.py", line 379, in start_server
    next_tile, pronunciatio, description = persona.move(
  File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\persona.py", line 222, in move
    plan = self.plan(maze, personas, new_day, retrieved)
  File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\persona.py", line 148, in plan
    return plan(self, maze, personas, new_day, retrieved)
  File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\cognitive_modules\plan.py", line 959, in plan
    _determine_action(persona, maze)
  File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\cognitive_modules\plan.py", line 635, in _determine_action
    act_obj_desp = generate_act_obj_desc(act_game_object, act_desp, persona)
  File "D:\AIwork\SaturnCassini\reverie\backend_server\persona\cognitive_modules\plan.py", line 269, in generate_act_obj_desc
    return run_gpt_prompt_act_obj_desc(act_game_object, act_desp, persona)[0]
TypeError: 'NoneType' object is not subscriptable

What shall I do to resolve this error?
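
For context, the last line of the trace means run_gpt_prompt_act_obj_desc returned None rather than the expected (output, metadata) tuple, so the [0] subscript in generate_act_obj_desc fails. A Python function that reaches its end without executing a return statement implicitly returns None; a minimal sketch of that failure mode, using simplified hypothetical names rather than the project's exact code:

def run_gpt_prompt_act_obj_desc_sketch():
    output = False  # stands in for a ChatGPT-style call that failed validation
    if output != False:
        return output, [output]  # the expected (output, metadata) tuple
    # nothing below this point, so the function falls off the end
    # and implicitly returns None

result = run_gpt_prompt_act_obj_desc_sketch()
result[0]  # TypeError: 'NoneType' object is not subscriptable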

Me1onMonster commented 9 months ago

I had the same problem. Have you solved it yet?

nyoma-diamond commented 9 months ago

I think I found the mistake that is causing this: lines 1028-1042 of run_gpt_prompt.py are commented out. I'm unsure why that is, but uncommenting that code appears to fix things.

Thus the function __chat_func_validate, and the code that follows it, should be:

def __chat_func_validate(gpt_response, prompt=""): ############
    try: 
      gpt_response = __func_clean_up(gpt_response, prompt="")
    except: 
      return False
    return True 

  print ("asdhfapsh8p9hfaiafdsi;ldfj as DEBUG 6") ########
  gpt_param = {"engine": "text-davinci-002", "max_tokens": 15, 
               "temperature": 0, "top_p": 1, "stream": False,
               "frequency_penalty": 0, "presence_penalty": 0, "stop": None}
  prompt_template = "persona/prompt_template/v3_ChatGPT/generate_obj_event_v1.txt" ########
  prompt_input = create_prompt_input(act_game_object, act_desp, persona)  ########
  prompt = generate_prompt(prompt_input, prompt_template)
  example_output = "being fixed" ########
  special_instruction = "The output should ONLY contain the phrase that should go in <fill in>." ########
  fail_safe = get_fail_safe(act_game_object) ########
  output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                          __chat_func_validate, __chat_func_clean_up, True)
  if output != False:  # the ChatGPT-style path succeeded, so return early
    return output, [output, prompt, gpt_param, prompt_input, fail_safe]
  # ChatGPT Plugin ===========================================================

 # !!! The code below was commented out for some reason
  gpt_param = {"engine": "text-davinci-003", "max_tokens": 30,
               "temperature": 0, "top_p": 1, "stream": False,
               "frequency_penalty": 0, "presence_penalty": 0, "stop": ["\n"]}
  prompt_template = "persona/prompt_template/v2/generate_obj_event_v1.txt"
  prompt_input = create_prompt_input(act_game_object, act_desp, persona)
  prompt = generate_prompt(prompt_input, prompt_template)
  fail_safe = get_fail_safe(act_game_object)
  output = safe_generate_response(prompt, gpt_param, 5, fail_safe,
                                   __func_validate, __func_clean_up)

  if debug or verbose:
    print_run_prompts(prompt_template, persona, gpt_param,
                      prompt_input, prompt, output)

  return output, [output, prompt, gpt_param, prompt_input, fail_safe]
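
Independent of restoring the commented-out block, a small guard at the call site in plan.py (line 269 in the trace above) would turn the crash into a soft failure. A sketch; the fail-safe string here is an assumption that mirrors the get_fail_safe pattern above, not the project's actual default:

  # in generate_act_obj_desc (plan.py), guard against a None return
  result = run_gpt_prompt_act_obj_desc(act_game_object, act_desp, persona)
  if result is None:
      # hypothetical fail-safe; the real get_fail_safe lives inside run_gpt_prompt.py
      return act_game_object + " is idle"
  return result[0]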

It's also commented out in the original codebase, so I'm unsure whether this problem is a result of using GPT4All instead of the OpenAI API.
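
One way to check that hypothesis is to log what the ChatGPT-style path returns before the early return: if it is False on every call under GPT4All, execution always falls through to the previously commented-out block. A minimal debugging sketch over the same call shown above, with one added print:

  output = ChatGPT_safe_generate_response(prompt, example_output, special_instruction, 3, fail_safe,
                                          __chat_func_validate, __chat_func_clean_up, True)
  print(f"DEBUG act_obj_desc: ChatGPT path returned {output!r}")  # False => every retry failed validation
  if output != False:
    return output, [output, prompt, gpt_param, prompt_input, fail_safe]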