joonspk-research / generative_agents

Generative Agents: Interactive Simulacra of Human Behavior
Apache License 2.0
17.47k stars 2.26k forks

Info regarding validation error in `run_gpt_prompt_task_decomp(...)` #79

Open kaben opened 1 year ago

kaben commented 1 year ago

@joonspk-research: your code comments indicate you're troubleshooting validation errors in run_gpt_prompt_task_decomp(...):

https://github.com/joonspk-research/generative_agents/blob/fe05a71d3e4ed7d10bf68aa4eda6dd995ec070f4/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py#L364

https://github.com/joonspk-research/generative_agents/blob/fe05a71d3e4ed7d10bf68aa4eda6dd995ec070f4/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py#L417

Here's an example GPT response that triggers the error:

            reviewing his research question and objectives. (duration in minutes: 10, minutes left: 170)
2) Klaus is conducting a literature review to gather relevant sources. (duration in minutes: 30, minutes left: 140)
3) Klaus is taking notes on key findings from the literature. (duration in minutes: 20, minutes left: 120)
4) Klaus is organizing his notes and creating an outline for his paper. (duration in minutes: 15, minutes left: 105)
5) Klaus is writing the introduction and background section of his paper. (duration in minutes: 30, minutes left: 75)
6) Klaus is taking a short break. (duration in minutes: 10, minutes left: 65)
7) Klaus is analyzing data and findings from his research. (duration in minutes: 40, minutes left: 25)
8) Klaus is writing the results and discussion section of his paper. (duration in minutes: 30, minutes left: -5)
Note: The total duration of the subtasks exceeds the available time.

The final `Note:` line causes the error: it doesn't match the format the parsing code expects.

(The reason it's in the wrong format is interesting: the LLM is trying to alert you to a separate error, namely that its own durations don't add up. In other words, the LLM is trying to help you by telling you it made a mistake. You might be able to use this to your advantage by asking the LLM to report errors in a format you can easily parse and log.)
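For illustration, here is a minimal sketch (not the repository's code; `parse_task_decomp` is a hypothetical name) of a tolerant clean-up that extracts `(task, duration)` pairs and silently skips any line, such as the trailing `Note:` line above, that doesn't match the expected numbered format:

```python
import re

# Matches lines like:
#   "2) Klaus is conducting a literature review ... (duration in minutes: 30, minutes left: 140)"
# and skips anything else (e.g. "Note: The total duration ...").
LINE_RE = re.compile(
    r"^\s*\d+\)\s*(?P<task>.+?)\s*"
    r"\(duration in minutes:\s*(?P<duration>-?\d+),"
)

def parse_task_decomp(gpt_response: str):
    """Return [(task, duration_minutes), ...], ignoring non-matching lines."""
    tasks = []
    for line in gpt_response.splitlines():
        m = LINE_RE.match(line)
        if m:
            tasks.append((m.group("task"), int(m.group("duration"))))
    return tasks
```

Skipped lines could also be logged instead of dropped, which would surface exactly the kind of self-reported error shown above.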

babytdream commented 1 year ago

Hello, whenever `run xxx` ends, this error appears. Have you encountered this problem?

Traceback (most recent call last):
  File "/data/generative_agents/reverie/backend_server/reverie.py", line 471, in open_server
    rs.start_server(int_count)
  File "/data/generative_agents/reverie/backend_server/reverie.py", line 379, in start_server
    next_tile, pronunciatio, description = persona.move(
  File "/data/generative_agents/reverie/backend_server/persona/persona.py", line 222, in move
    plan = self.plan(maze, personas, new_day, retrieved)
  File "/data/generative_agents/reverie/backend_server/persona/persona.py", line 148, in plan
    return plan(self, maze, personas, new_day, retrieved)
  File "/data/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 959, in plan
    _determine_action(persona, maze)
  File "/data/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 573, in _determine_action
    generate_task_decomp(persona, act_desp, act_dura))
  File "/data/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 164, in generate_task_decomp
    return run_gpt_prompt_task_decomp(persona, task, duration)[0]
  File "/data/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 439, in run_gpt_prompt_task_decomp
    output = safe_generate_response(prompt, gpt_param, 5, get_fail_safe(),
  File "/data/generative_agents/reverie/backend_server/persona/prompt_template/gpt_structure.py", line 262, in safe_generate_response
    return func_clean_up(curr_gpt_response, prompt=prompt)
  File "/data/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 378, in __func_clean_up
    duration = int(k[1].split(",")[0].strip())
IndexError: list index out of range
Error.
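This traceback is consistent with kaben's diagnosis above. A standalone sketch of the failure (assuming, based on the traceback's `k[1].split(",")[0]`, that the clean-up splits each response line on the `(duration in minutes:` marker):

```python
# A line without the "(duration in minutes:" marker splits into a
# single-element list, so indexing k[1] raises IndexError.
line = "Note: The total duration of the subtasks exceeds the available time."
k = line.split("(duration in minutes:")

try:
    duration = int(k[1].split(",")[0].strip())
except IndexError:
    duration = None  # k has only one element for this line
```

So any stray commentary line in the GPT response, not just this particular `Note:`, would trigger the same `IndexError`.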
moulshri-7 commented 9 months ago

@babytdream Hi, I'm facing the same issue. Did you find a solution?