YueFan1014 / VideoAgent

This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024)
Apache License 2.0

Error running demo #1

Closed: Marlod390 closed this issue 3 months ago

Marlod390 commented 3 months ago

Dear authors,

thank you for your great work. I encountered the following error when running the demo with the question "How many boats are there in the video?":

Traceback (most recent call last):
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/gradio/queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/gradio/route_utils.py", line 258, in call_process_api
    output = await app.get_blocks().process_api(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/gradio/blocks.py", line 1710, in process_api
    result = await self.call_function(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/gradio/blocks.py", line 1250, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/gradio/utils.py", line 693, in wrapper
    response = f(*args, **kwargs)
  File "/mnt/qb/work/ponsmoll/pba178/project/VideoAgent/demo.py", line 20, in ask_question
    answer, log = ReActAgent(video_path=video_file, question=question, base_dir=base_dir, vqa_tool=vqa_tool, use_reid=use_reid, openai_api_key=openai_api_key)
  File "/mnt/qb/work/ponsmoll/pba178/project/VideoAgent/main.py", line 104, in ReActAgent
    agent_executor.invoke({"input": question})
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/chains/base.py", line 162, in invoke
    raise e
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/agents/agent.py", line 1371, in _call
    next_step_output = self._take_next_step(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
    [
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
    [
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
    output = self.agent.plan(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain/agents/agent.py", line 387, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2424, in stream
    yield from self.transform(iter([input]), config, **kwargs)
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2411, in transform
    yield from self._transform_stream_with_config(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
    for output in final_pipeline:
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1035, in transform
    for chunk in input:
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 3991, in transform
    yield from self.bound.transform(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1045, in transform
    yield from self.stream(final, config, **kwargs)
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 249, in stream
    raise e
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 233, in stream
    for chunk in self._stream(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/langchain_openai/chat_models/base.py", line 398, in _stream
    for chunk in self.client.create(messages=message_dicts, **params):
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/openai/_utils/_utils.py", line 271, in wrapper
    return func(*args, **kwargs)
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/openai/resources/chat/completions.py", line 648, in create
    return self._post(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/openai/_base_client.py", line 1179, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/openai/_base_client.py", line 868, in request
    return self._request(
  File "/mnt/qb/work/ponsmoll/pba178/.conda/videoagent/lib/python3.9/site-packages/openai/_base_client.py", line 959, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

I have downloaded the cache and tool_models and placed them in the correct paths, and I have entered the OpenAI API key in default.yaml.
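
For reference, here is a minimal sketch (assuming the OpenAI Python client v1.x that appears in the traceback; the key string is a placeholder) for checking whether the key can actually reach gpt-4:

```python
# Minimal access check, not part of VideoAgent itself:
# verifies that the key from default.yaml can reach the gpt-4 model.
from openai import OpenAI, NotFoundError

client = OpenAI(api_key="sk-...")  # placeholder; use the key from default.yaml

try:
    model = client.models.retrieve("gpt-4")
    print("gpt-4 is available:", model.id)
except NotFoundError:
    # Same 404 as in the traceback: this key has no gpt-4 access.
    print("gpt-4 not accessible; models available to this key:")
    for m in client.models.list():
        print(" ", m.id)
```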

YueFan1014 commented 3 months ago

Hi, thank you for your interest in our work. The problem you encountered seems to be an issue with your OpenAI API key. Please check here and here to see if they solve your problem.
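
If the key simply has no gpt-4 access, one workaround is to point the agent at a chat model the key can use. A rough sketch, assuming the LLM is built with LangChain's ChatOpenAI as the langchain_openai frames in the traceback suggest (adjust to wherever main.py actually constructs it):

```python
# Rough sketch: use a chat model the API key can access instead of gpt-4.
# The exact construction site in main.py is assumed, not quoted from the repo.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-3.5-turbo",  # any chat model the key has access to
    temperature=0,
    api_key="sk-...",       # placeholder; normally taken from default.yaml
)
```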