Traceback (most recent call last):
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/mnt/2TB_SSD/Auto-GPT/autogpt/__main__.py", line 5, in <module>
autogpt.app.cli.main()
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/site-packages/click/core.py", line 1666, in invoke
rv = super().invoke(ctx)
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/rowbot/anaconda3/envs/ooga/lib/python3.10/site-packages/click/decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
File "/mnt/2TB_SSD/Auto-GPT/autogpt/app/cli.py", line 121, in main
run_auto_gpt(
File "/mnt/2TB_SSD/Auto-GPT/autogpt/app/main.py", line 174, in run_auto_gpt
run_interaction_loop(agent)
File "/mnt/2TB_SSD/Auto-GPT/autogpt/app/main.py", line 255, in run_interaction_loop
command_name, command_args, assistant_reply_dict = agent.think()
File "/mnt/2TB_SSD/Auto-GPT/autogpt/agents/base.py", line 121, in think
return self.on_response(raw_response, thought_process_id, prompt, instruction)
File "/mnt/2TB_SSD/Auto-GPT/autogpt/agents/base.py", line 345, in on_response
"assistant", llm_response.content, "ai_response"
AttributeError: 'str' object has no attribute 'content'
I printed llm_response:
print("type:", type(llm_response), "content:", llm_response)
If I take out the .content, the UI just shows Thinking \ (a spinning loading indicator) forever, while on the text-generation-webui side the terminal shows that it did output a message:
Output generated in 54.56 seconds (3.15 tokens/s, 172 tokens, context 761, seed 1172004049)
but nothing happens on the Auto-GPT side.
Here are the flags I'm using for server.py:
--listen --model-menu --api --gpu-memory 12 --verbose --auto-devices --trust-remote-code
(I got some of these options from step 2 in the README; I've also tried with the settings.yaml template.)
Could I get help understanding what is going wrong? I know text-generation-webui gets frequent updates, so a change there may have broken this plugin. Let me know if I need to provide additional information to help fix this.
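In case it's useful, here is a sketch of the workaround I'm experimenting with locally. It assumes the plugin sometimes receives a plain string instead of a message object with a .content attribute (the helper name and the FakeMessage class are my own for illustration, not from the Auto-GPT code):

```python
# Hypothetical type-tolerant accessor: return the reply text whether
# llm_response is a plain str or an object carrying text in .content.
def response_text(llm_response):
    if isinstance(llm_response, str):
        return llm_response
    return llm_response.content


class FakeMessage:
    """Stand-in for a response object that stores its text in .content."""
    def __init__(self, content):
        self.content = content


print(response_text("plain string reply"))         # works for str
print(response_text(FakeMessage("object reply")))  # works for objects
```

With this in place the AttributeError goes away for me, but Auto-GPT still hangs on "Thinking", so I suspect the response format mismatch goes deeper than this one call site.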