chenfei-wu / TaskMatrix


ValueError: Could not parse LLM output: `Yes` #324

Open JihadAKl opened 1 year ago

JihadAKl commented 1 year ago

I am loading these models: ImageCaptioning_cuda:0, ImageEditing_cuda:0, Text2Image_cuda:0

> Entering new AgentExecutor chain...
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 384, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1032, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 844, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/content/visual-chatgpt/visual_chatgpt.py", line 1015, in run_text
    res = self.agent({"input": text.strip()})
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 168, in __call__
    raise e
  File "/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py", line 165, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 503, in _call
    next_step_output = self._take_next_step(
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 406, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 102, in plan
    action = self._get_next_action(full_inputs)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/agent.py", line 64, in _get_next_action
    parsed_output = self._extract_tool_and_input(full_output)
  File "/usr/local/lib/python3.9/dist-packages/langchain/agents/conversational/base.py", line 84, in _extract_tool_and_input
    raise ValueError(f"Could not parse LLM output: {llm_output}")
ValueError: Could not parse LLM output: Yes

prompt: take the couch from image/28496a09.png and put it in image/fe5e3d6f.png
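For context on why this fails: the conversational agent only accepts two output shapes from the LLM, a final answer containing the AI prefix (`AI: ...`) or a tool call in the `Action: ... / Action Input: ...` format. A bare `Yes` matches neither, so `_extract_tool_and_input` raises. Below is a minimal, self-contained sketch of that parsing logic (simplified from LangChain's conversational agent, not the exact library code) plus a lenient wrapper that treats unparseable output as a final answer instead of crashing, which is one common workaround:

```python
import re

AI_PREFIX = "AI"  # default prefix the conversational agent expects


def extract_tool_and_input(llm_output: str):
    """Simplified sketch of the conversational agent's output parsing.

    Returns a (tool_name_or_prefix, text) pair, or raises ValueError when
    the output matches neither the final-answer nor the Action format --
    exactly what happens with a bare `Yes`.
    """
    # Case 1: final answer, e.g. "Thought: ...\nAI: here is the image"
    if f"{AI_PREFIX}:" in llm_output:
        return AI_PREFIX, llm_output.split(f"{AI_PREFIX}:")[-1].strip()
    # Case 2: tool invocation, e.g. "Action: Text2Image\nAction Input: a couch"
    match = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", llm_output)
    if not match:
        raise ValueError(f"Could not parse LLM output: {llm_output}")
    return match.group(1), match.group(2).strip()


def extract_tool_and_input_lenient(llm_output: str):
    """Workaround sketch: fall back to treating the raw text as a final answer."""
    try:
        return extract_tool_and_input(llm_output)
    except ValueError:
        return AI_PREFIX, llm_output.strip()
```

In newer LangChain versions the same effect can be had by constructing the `AgentExecutor` with `handle_parsing_errors=True`; with the pinned older version used here, patching `_extract_tool_and_input` (or retrying the prompt) is the practical option.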