Dylanb-dev / testgpt


Fix #1: Add todo #3

Closed · github-actions[bot] closed 1 year ago

github-actions[bot] commented 1 year ago

AutoPR Failure

Fixes #1

Status

This pull request was being autonomously generated by AutoPR, but it encountered an error.

Error:

~~~python
Traceback (most recent call last):
  File "/app/autopr/services/action_service.py", line 186, in run_action
    results = action.run(arguments, context)
  File "/app/autopr/actions/look_at_files.py", line 375, in run
    filepaths = self.get_initial_filepaths(files, context)
  File "/app/autopr/actions/look_at_files.py", line 276, in get_initial_filepaths
    response = self.rail_service.run_prompt_rail(
  File "/app/autopr/services/rail_service.py", line 355, in run_prompt_rail
    prompt = self.completions_repo.complete(
  File "/app/autopr/repos/completions_repo.py", line 84, in complete
    raise e
  File "/app/autopr/repos/completions_repo.py", line 65, in complete
    result = self._complete(
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/app/autopr/repos/completions_repo.py", line 142, in _complete
    openai_response = openai.ChatCompletion.create(
  File "/venv/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/autopr/services/agent_service.py", line 78, in run_agent
    agent.handle_event(event)
  File "/app/autopr/agents/plan_and_code.py", line 216, in handle_event
    self.create_pull_request(event)
  File "/app/autopr/agents/plan_and_code.py", line 176, in create_pull_request
    context = self.action_service.run_actions_iteratively(
  File "/app/autopr/services/action_service.py", line 215, in run_actions_iteratively
    context = self.run_action(action_id, context)
  File "/app/autopr/services/action_service.py", line 195, in run_action
    self.publish_service.end_section(f"❌ Failed {action_id}")
  File "/app/autopr/services/publish_service.py", line 237, in end_section
    raise ValueError("Cannot end root section")
ValueError: Cannot end root section
~~~

Please open an issue to report this.
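The failure has two layers. The root cause is the `openai.error.InvalidRequestError` above (the API key cannot use `gpt-4`); a secondary bug then surfaces while handling it, because `publish_service.end_section` is called when only the root section is open, so the report ends with `ValueError: Cannot end root section` instead of the real error. A minimal sketch of a guard for that second problem, assuming the service tracks sections as a stack (the class, attribute, and method names below are illustrative, not AutoPR's actual code), might look like this:

~~~python
from typing import List, Optional

class PublishService:
    """Simplified, hypothetical section tracker; not AutoPR's real PublishService."""

    def __init__(self) -> None:
        # Assumption: open sections are kept on a stack, with the root section at index 0.
        self._sections: List[str] = ["root"]

    def start_section(self, title: str) -> None:
        self._sections.append(title)

    def end_section(self, title: Optional[str] = None) -> None:
        # Guard: never pop the root section. Warn and return instead of raising, so a
        # failure inside error handling cannot mask the original exception (here, the
        # gpt-4 access error).
        if len(self._sections) <= 1:
            print(f"warning: tried to end the root section ({title!r}); ignoring")
            return
        self._sections.pop()
~~~

With a guard like this, the comment would report the underlying gpt-4 access error directly rather than the secondary `ValueError`.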

📖 Looking at files

💬 Asking for InitialFileSelect

Prompt

~~~
Hey, somebody just opened an issue in my repo, could you help me write a pull request?

Given context variables enclosed by +-+:

Issue:
+-+
#1 Add todo

Dylanb-dev: Add a todo list that saves to the database

+-+

The list of files in the repo is:
```db.sqlite3 (0 tokens)
manage.py (247 tokens)
testgpt/__init__.py (0 tokens)
testgpt/asgi.py (133 tokens)
testgpt/settings.py (1341 tokens)
testgpt/urls.py (259 tokens)
testgpt/wsgi.py (133 tokens)
.github/workflows/autopr.yml (632 tokens)```

Should we take a look at any files? If so, pick only a few files (max 5000 tokens).
Respond with a very short rationale, and a list of files.
If looking at files would be a waste of time with regard to the issue, respond with an empty list.
~~~
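The per-file token counts in the prompt (e.g. `testgpt/settings.py (1341 tokens)`) are there to keep the file list within the model's context budget. As a hedged sketch of how such counts can be produced (this is not AutoPR's implementation, and the choice of the `cl100k_base` encoding is an assumption), one could use `tiktoken`:

~~~python
import os
from typing import Dict

import tiktoken

# Assumption: cl100k_base is the tokenizer used by gpt-4 / gpt-3.5-turbo; this is an
# illustration of how per-file counts could be computed, not AutoPR's code.
ENCODING = tiktoken.get_encoding("cl100k_base")

def count_file_tokens(repo_root: str) -> Dict[str, int]:
    """Map each file path (relative to repo_root) to its token count."""
    counts: Dict[str, int] = {}
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                # Binary or unreadable files are counted as 0 tokens.
                text = ""
            counts[os.path.relpath(path, repo_root)] = len(ENCODING.encode(text))
    return counts

print(count_file_tokens("."))
~~~

Binary and empty files come out as 0 tokens, which is consistent with the `db.sqlite3 (0 tokens)` and `testgpt/__init__.py (0 tokens)` entries above.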

⚠️⚠️⚠️ Your OpenAI API key does not have access to the gpt-4 model. Please note that ChatGPT Plus does not give you access to the gpt-4 API; you need to sign up on the GPT-4 API waitlist.
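A quick way to check whether a key can actually use `gpt-4`, and to fall back to another model when it cannot, is sketched below against the legacy `openai` 0.x SDK that appears in the traceback. The helper function and the `gpt-3.5-turbo` fallback are illustrative assumptions, not part of AutoPR; if AutoPR exposes a model setting in `.github/workflows/autopr.yml`, pointing it at a model the key can use would avoid this error in the workflow itself.

~~~python
import os
import openai

# Legacy openai-python 0.x style, matching the SDK paths in the traceback above.
openai.api_key = os.environ["OPENAI_API_KEY"]

def pick_model(preferred: str = "gpt-4", fallback: str = "gpt-3.5-turbo") -> str:
    """Return `preferred` if this key can see it, otherwise `fallback` (illustrative helper)."""
    try:
        # The models endpoint raises InvalidRequestError when the model is unavailable to this key.
        openai.Model.retrieve(preferred)
        return preferred
    except openai.error.InvalidRequestError:
        return fallback

model = pick_model()
response = openai.ChatCompletion.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello"}],
)
print(model, response["choices"][0]["message"]["content"])
~~~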
