Closed: NasonZ closed this issue 2 months ago
Could you try changing `--output-dir` to something like `results`? Based on the log, `../results\\The_promise_and_technical_difficulties_of_SSTO_vehicles\\raw_search_results.json` looks incorrect. We set the default value of `--output-dir` to `../results` in the scripts, but this relative path is not Windows-friendly. (Thanks for providing the OS information!)
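The Windows path issue above can be illustrated with a minimal sketch (the topic/filename below are taken from the log; the exact path-building code in the repo may differ). Building every component with `os.path.join` keeps separators consistent on both Linux and Windows, whereas concatenating a hard-coded `../results` prefix with `\\`-joined components is what produces mixed paths like the one in the error:

```python
import os

# Minimal sketch: construct the per-topic output path portably.
topic = "The promise and technical difficulties of SSTO vehicles"
article_dir = topic.replace(" ", "_")

# os.path.join uses the platform's separator for every component,
# so the result is valid on Linux ('/') and Windows ('\\') alike.
path = os.path.join("results", article_dir, "raw_search_results.json")
print(path)
```

Passing `--output-dir results` (a path relative to the working directory, with no `..` prefix) sidesteps the mixed-separator problem entirely.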
I've switched over to my Linux machine, but I'm now getting a slightly different error: no `storm_gen_outline.txt` is produced by the `run_prewriting` script, and `direct_gen_outline.txt` is not produced either. This leads to `FileNotFoundError: [Errno 2] No such file or directory: '../results/The_promise_and_technical_difficulties_of_SSTO_vehicles/storm_gen_outline.txt'` when running `python -m scripts.run_writing --input-source console --engine gpt-35-turbo --do-polish-article --remove-duplicate`.
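Since `run_writing` depends on the prewriting stage's outputs, a quick pre-flight check can tell you which files are missing before the `FileNotFoundError` surfaces. This is a hypothetical helper, not part of STORM; the file names are the ones mentioned in this thread:

```python
import os

def missing_prewriting_outputs(output_dir, article_dir):
    """Return the prewriting output files that do not exist yet."""
    expected = [
        "conversation_log.json",
        "raw_search_results.json",
        "direct_gen_outline.txt",   # produced by the direct-generation baseline
        "storm_gen_outline.txt",    # required by run_writing
    ]
    base = os.path.join(output_dir, article_dir)
    return [f for f in expected
            if not os.path.exists(os.path.join(base, f))]

missing = missing_prewriting_outputs(
    "../results",
    "The_promise_and_technical_difficulties_of_SSTO_vehicles")
print(missing)
```

If the outline files are listed as missing, the outline-generation step failed earlier (as the `InvalidRequestError` in the traceback below suggests), and rerunning `run_writing` cannot succeed until prewriting completes.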
To reproduce my error:
(storm) me@me-MS:~/Prototypes/graphs/storm/src$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research
Topic: The promise and technical difficulties of SSTO vehicles
Ground truth url (will be excluded from source):
engine : INFO : _research_topic executed in 84.4190 seconds
openai : INFO : error_code=None error_message='Invalid URL (POST /v1/engines/gpt-35-turbo-16k/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 96, in <module>
main(parser.parse_args())
File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 54, in main
runner.run(topic=topic,
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 405, in run
outline = self._generate_outline(topic, conversations, callback_handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 26, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 222, in _generate_outline
result = write_outline(topic=topic, dlg_history=sum(conversations, []), callback_handler=callback_handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/primitives/program.py", line 29, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/modules/write_page.py", line 179, in forward
old_outline = clean_up_outline(self.draft_page_outline(topic=topic).outline)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 60, in __call__
return self.forward(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 87, in forward
x, C = dsp.generate(signature, **config)(x, stage=self.stage)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/primitives/predict.py", line 78, in do_generate
completions: list[dict[str, Any]] = generator(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/modules/utils.py", line 73, in __call__
response = self.request(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/backoff/_sync.py", line 105, in retry
ret = target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 136, in request
return self.basic_request(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 109, in basic_request
response = chat_request(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 247, in chat_request
return _cached_gpt3_turbo_request_v2_wrapped(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/cache_utils.py", line 17, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 221, in _cached_gpt3_turbo_request_v2_wrapped
return _cached_gpt3_turbo_request_v2(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 655, in __call__
return self._cached_call(args, kwargs)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 598, in _cached_call
out, metadata = self.call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 856, in call
output = self.func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 216, in _cached_gpt3_turbo_request_v2
return cast(OpenAIObject, openai.ChatCompletion.create(**kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-35-turbo-16k/chat/completions)
(storm) me@me-MS-7C56:~/Prototypes/graphs/storm/src$ python -m scripts.run_writing --input-source console --engine gpt-35-turbo --do-polish-article --remove-duplicate
Topic: The promise and technical difficulties of SSTO vehicles
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/me/Prototypes/graphs/storm/src/scripts/run_writing.py", line 94, in <module>
main(parser.parse_args())
File "/home/me/Prototypes/graphs/storm/src/scripts/run_writing.py", line 54, in main
runner.run(topic=topic,
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 413, in run
outline = load_str(os.path.join(self.args.output_dir, self.article_dir_name, 'storm_gen_outline.txt'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/modules/utils.py", line 317, in load_str
with open(path, 'r') as f:
^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../results/The_promise_and_technical_difficulties_of_SSTO_vehicles/storm_gen_outline.txt'
(storm) me@me-MS-7C56:~/Prototypes/graphs/storm$ tree -L 2
.
├── assets
│   ├── overview.png
│   └── two_stages.jpg
├── eval
│   ├── citation_quality.py
│   ├── eval_article_quality.py
│   ├── eval_outline_quality.py
│   ├── eval_rubric_5.json
│   ├── evaluation_prometheus.py
│   ├── evaluation_trim_length.py
│   └── metrics.py
├── FreshWiki
│   ├── json
│   ├── topic_list.csv
│   ├── txt
│   └── wikipage_extractor.py
├── LICENSE
├── README.md
├── requirements.txt
├── results
│   └── The_promise_and_technical_difficulties_of_SSTO_vehicles
│       ├── conversation_log.json
│       └── raw_search_results.json
├── secrets.toml
└── src
    ├── assertion.log
    ├── azure_openai_usage.log
    ├── engine.py
    ├── modules
    ├── openai_usage.log
    ├── __pycache__
    └── scripts
Environment:
Python version: 3.11
Operating System: Ubuntu 22.04
@NasonZ Could you try another API endpoint listed here: https://platform.openai.com/docs/models/gpt-3-5-turbo? Here's the pointer for changing the engine: https://github.com/stanford-oval/storm/blob/abf3dacadb1cd2aa942c1ae8dc83f5110c8ca5c8/src/scripts/run_prewriting.py#L29
I suspect the issue comes from the failed call to `/v1/engines/gpt-35-turbo-16k/chat/completions` at an earlier stage.
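One plausible root cause, offered as an assumption rather than a diagnosis: the legacy `openai`-python SDK (pre-1.0) routes a request through the deprecated Engines endpoint when it is given an `engine`/deployment name (Azure style) instead of a `model` name, and `api.openai.com` rejects that path with exactly this "Invalid URL" error. The toy function below mimics that path selection to show why the model string alone doesn't fix it; it is not the SDK's actual code:

```python
# Sketch (assumption): reproduce the request path the legacy SDK would build.
def chat_completions_path(model=None, engine=None):
    """Mimic how a legacy client chooses the chat-completions route."""
    if engine is not None:
        # Azure-style/deprecated Engines route -- rejected by api.openai.com
        return f"/v1/engines/{engine}/chat/completions"
    # Standard route used when a plain model name is supplied
    return "/v1/chat/completions"

print(chat_completions_path(engine="gpt-35-turbo-16k"))
# matches the path in the error log above
print(chat_completions_path(model="gpt-3.5-turbo"))
```

If this is what's happening, changing the model string (e.g. to `gpt-3.5-turbo-0125`) won't help while the client still treats it as an engine name; the client configuration (or `api_type`) is what needs to change.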
@Yucheng-Jiang
I have adjusted the endpoint but I'm still getting the error.
storm/src$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research
Topic: The promise and technical difficulties of SSTO vehicles
Ground truth url (will be excluded from source):
...
engine : INFO : _research_topic executed in 15.5520 seconds
openai : INFO : error_code=None error_message='Invalid URL (POST /v1/engines/gpt-3.5-turbo-0125/chat/completions)' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 96, in <module>
main(parser.parse_args())
File "/home/me/Prototypes/graphs/storm/src/scripts/run_prewriting.py", line 54, in main
runner.run(topic=topic,
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 405, in run
outline = self._generate_outline(topic, conversations, callback_handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 26, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/engine.py", line 222, in _generate_outline
result = write_outline(topic=topic, dlg_history=sum(conversations, []), callback_handler=callback_handler)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/primitives/program.py", line 29, in __call__
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/modules/write_page.py", line 179, in forward
old_outline = clean_up_outline(self.draft_page_outline(topic=topic).outline)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 60, in __call__
return self.forward(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dspy/predict/predict.py", line 87, in forward
x, C = dsp.generate(signature, **config)(x, stage=self.stage)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/primitives/predict.py", line 78, in do_generate
completions: list[dict[str, Any]] = generator(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/Prototypes/graphs/storm/src/modules/utils.py", line 73, in __call__
response = self.request(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/backoff/_sync.py", line 105, in retry
ret = target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 136, in request
return self.basic_request(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 109, in basic_request
response = chat_request(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 247, in chat_request
return _cached_gpt3_turbo_request_v2_wrapped(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/cache_utils.py", line 17, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 221, in _cached_gpt3_turbo_request_v2_wrapped
return _cached_gpt3_turbo_request_v2(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 655, in __call__
return self._cached_call(args, kwargs)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 598, in _cached_call
out, metadata = self.call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/joblib/memory.py", line 856, in call
output = self.func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/dsp/modules/gpt3.py", line 216, in _cached_gpt3_turbo_request_v2
return cast(OpenAIObject, openai.ChatCompletion.create(**kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "/home/me/anaconda3/envs/storm/lib/python3.11/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-3.5-turbo-0125/chat/completions)
Also, `storm_gen_outline.txt` and `direct_gen_outline.txt` are still not being produced by the `run_prewriting` script.
Let me know if there's anything else you'd like me to adjust/try.
We have updated the repo and wrapped it up as a Python package for easier installation. Check out the updated documentation here: https://github.com/stanford-oval/storm?tab=readme-ov-file#installation. Feel free to open another issue if you run into any issues.
Description: I encountered an error when trying to run the run_prewriting.py script with the gpt-3.5-turbo engine.
I followed the setup instructions in the README, including:
To reproduce my error:
Environment:
Python version: 3.11
Operating System: Windows
Summary: I got a few errors when running `run_prewriting`; despite these errors, the script does produce a `conversation_log.json` and a `raw_search_results.json`, which look OK. I then try to run `run_writing`, but this completely fails due to `FileNotFoundError: [Errno 2] No such file or directory: '../results\\The_promise_and_technical_difficulties_of_SSTO_vehicles\\raw_search_results.json'`. Could anyone please advise on how to resolve these errors? Let me know if any additional information or logs would be helpful for troubleshooting.