Pythagora-io / gpt-pilot

The first real AI developer

[Bug]: KeyError: 'user_feedback_qa' #1025

Open cranyy opened 3 months ago

cranyy commented 3 months ago

Version

Visual Studio Code extension

Operating System

Windows 10

What happened?


```
[Pythagora] Stopping Pythagora due to error:

File `core/cli/main.py`, line 37, in run_project
    success = await orca.run()
File `core/agents/orchestrator.py`, line 67, in run
    response = await agent.run()
File `core/agents/code_monkey.py`, line 32, in run
    return await self.implement_changes()
File `core/agents/code_monkey.py`, line 63, in implement_changes
    user_feedback_qa = iterations[-1]["user_feedback_qa"]
KeyError: 'user_feedback_qa'

(env) E:\Project\gpt-pilot>
```

This happens when I answer "no" multiple times to some of the suggestions. I can't undo or go back any steps, and it always leads to the error above.
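
For context, the failing line is a plain dictionary subscript on the last iteration entry. A minimal, standalone sketch (the iteration record below is hypothetical, not actual gpt-pilot data) reproduces the same error:

```python
# Hypothetical iteration record written without a 'user_feedback_qa' key.
iterations = [{"description": "Fix the login form", "user_feedback": "no"}]

# Same access pattern as core/agents/code_monkey.py line 63 in the traceback above:
user_feedback_qa = iterations[-1]["user_feedback_qa"]  # raises KeyError: 'user_feedback_qa'
```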
DaddyWozBucks commented 3 months ago

Same issue using the repository, except it happened without any inputs during code review. Restarting immediately fails again with the same message.

Badlee2020 commented 3 months ago

same here

DaveTacker commented 3 months ago

Seems to be working again after making this change to `core/agents/code_monkey.py` L60:

```python
if len(iterations) > 0:
    if "description" in iterations[-1]:
        instructions = iterations[-1]["description"]
    if "user_feedback" in iterations[-1]:
        user_feedback = iterations[-1]["user_feedback"]
    if "user_feedback_qa" in iterations[-1]:
        user_feedback_qa = iterations[-1]["user_feedback_qa"]
```
DrivenIdeaLab commented 3 months ago

Resolved. Similar resolution to @DaveTacker:

````python
from os.path import basename

from pydantic import BaseModel, Field

from core.agents.base import BaseAgent
from core.agents.convo import AgentConvo
from core.agents.response import AgentResponse, ResponseType
from core.config import DESCRIBE_FILES_AGENT_NAME
from core.llm.parser import JSONParser, OptionalCodeBlockParser
from core.log import get_logger

log = get_logger(__name__)


class FileDescription(BaseModel):
    summary: str = Field(
        description="Detailed description summarized what the file is about, and what the major classes, functions, elements or other functionality is implemented."
    )
    references: list[str] = Field(
        description="List of references the file imports or includes (only files local to the project), where each element specifies the project-relative path of the referenced file, including the file extension."
    )


class CodeMonkey(BaseAgent):
    agent_type = "code-monkey"
    display_name = "Code Monkey"

    async def run(self) -> AgentResponse:
        if self.prev_response and self.prev_response.type == ResponseType.DESCRIBE_FILES:
            return await self.describe_files()
        else:
            return await self.implement_changes()

    async def implement_changes(self) -> AgentResponse:
        file_name = self.step["save_file"]["path"]

        current_file = await self.state_manager.get_file_by_path(file_name)
        file_content = current_file.content.content if current_file else ""

        task = self.current_state.current_task

        if self.prev_response and self.prev_response.type == ResponseType.CODE_REVIEW_FEEDBACK:
            attempt = self.prev_response.data["attempt"] + 1
            feedback = self.prev_response.data["feedback"]
            log.debug(f"Fixing file {file_name} after review feedback: {feedback} ({attempt}. attempt)")
            await self.send_message(f"Reworking changes I made to {file_name} ...")
        else:
            log.debug(f"Implementing file {file_name}")
            await self.send_message(f"{'Updating existing' if file_content else 'Creating new'} file {file_name} ...")
            self.next_state.action = (
                f'Update file "{basename(file_name)}"' if file_content else f'Create file "{basename(file_name)}"'
            )
            attempt = 1
            feedback = None

        iterations = self.current_state.iterations
        user_feedback = None
        user_feedback_qa = None
        llm = self.get_llm()
        if iterations:
            instructions = iterations[-1].get("description")
            user_feedback = iterations[-1].get("user_feedback")
            user_feedback_qa = iterations[-1].get("user_feedback_qa")
        else:
            instructions = self.current_state.current_task["instructions"]

        convo = AgentConvo(self).template(
            "implement_changes",
            file_name=file_name,
            file_content=file_content,
            instructions=instructions,
            user_feedback=user_feedback,
            user_feedback_qa=user_feedback_qa,
        )
        if feedback:
            convo.assistant(f"```\n{self.prev_response.data['new_content']}\n```\n").template(
                "review_feedback",
                content=self.prev_response.data["approved_content"],
                original_content=file_content,
                rework_feedback=feedback,
            )

        response: str = await llm(convo, temperature=0, parser=OptionalCodeBlockParser())
        # FIXME: provide a counter here so that we don't have an endless loop here
        return AgentResponse.code_review(self, file_name, task["instructions"], file_content, response, attempt)

    async def describe_files(self) -> AgentResponse:
        llm = self.get_llm(DESCRIBE_FILES_AGENT_NAME)
        to_describe = {
            file.path: file.content.content for file in self.current_state.files if not file.meta.get("description")
        }

        for file in self.next_state.files:
            content = to_describe.get(file.path)
            if content is None:
                continue

            if content == "":
                file.meta = {
                    **file.meta,
                    "description": "Empty file",
                    "references": [],
                }
                continue

            log.debug(f"Describing file {file.path}")
            await self.send_message(f"Describing file {file.path} ...")

            convo = (
                AgentConvo(self)
                .template(
                    "describe_file",
                    path=file.path,
                    content=content,
                )
                .require_schema(FileDescription)
            )
            llm_response: FileDescription = await llm(convo, parser=JSONParser(spec=FileDescription))

            file.meta = {
                **file.meta,
                "description": llm_response.summary,
                "references": llm_response.references,
            }
        return AgentResponse.done(self)
````

RoseSamaras commented 3 months ago

Here is the whole corrected `code_monkey.py` in `gpt-pilot/core/agents`:

````python
from os.path import basename

from pydantic import BaseModel, Field

from core.agents.base import BaseAgent
from core.agents.convo import AgentConvo
from core.agents.response import AgentResponse, ResponseType
from core.config import DESCRIBE_FILES_AGENT_NAME
from core.llm.parser import JSONParser, OptionalCodeBlockParser
from core.log import get_logger

log = get_logger(__name__)


class FileDescription(BaseModel):
    summary: str = Field(
        description="Detailed description summarized what the file is about, and what the major classes, functions, elements or other functionality is implemented."
    )
    references: list[str] = Field(
        description="List of references the file imports or includes (only files local to the project), where each element specifies the project-relative path of the referenced file, including the file extension."
    )


class CodeMonkey(BaseAgent):
    agent_type = "code-monkey"
    display_name = "Code Monkey"

    async def run(self) -> AgentResponse:
        if self.prev_response and self.prev_response.type == ResponseType.DESCRIBE_FILES:
            return await self.describe_files()
        else:
            return await self.implement_changes()

    async def implement_changes(self) -> AgentResponse:
        file_name = self.step["save_file"]["path"]

        current_file = await self.state_manager.get_file_by_path(file_name)
        file_content = current_file.content.content if current_file else ""

        task = self.current_state.current_task

        if self.prev_response and self.prev_response.type == ResponseType.CODE_REVIEW_FEEDBACK:
            attempt = self.prev_response.data["attempt"] + 1
            feedback = self.prev_response.data["feedback"]
            log.debug(f"Fixing file {file_name} after review feedback: {feedback} ({attempt}. attempt)")
            await self.send_message(f"Reworking changes I made to {file_name} ...")
        else:
            log.debug(f"Implementing file {file_name}")
            await self.send_message(f"{'Updating existing' if file_content else 'Creating new'} file {file_name} ...")
            self.next_state.action = (
                f'Update file "{basename(file_name)}"' if file_content else f'Create file "{basename(file_name)}"'
            )
            attempt = 1
            feedback = None

        iterations = self.current_state.iterations
        user_feedback = None
        user_feedback_qa = None
        llm = self.get_llm()
        if iterations:
            if "description" in iterations[-1]:
                instructions = iterations[-1]["description"]
            if "user_feedback" in iterations[-1]:
                user_feedback = iterations[-1]["user_feedback"]
            if "user_feedback_qa" in iterations[-1]:
                user_feedback_qa = iterations[-1]["user_feedback_qa"]
        else:
            instructions = self.current_state.current_task["instructions"]

        convo = AgentConvo(self).template(
            "implement_changes",
            file_name=file_name,
            file_content=file_content,
            instructions=instructions,
            user_feedback=user_feedback,
            user_feedback_qa=user_feedback_qa,
        )
        if feedback:
            convo.assistant(f"```\n{self.prev_response.data['new_content']}\n```\n").template(
                "review_feedback",
                content=self.prev_response.data["approved_content"],
                original_content=file_content,
                rework_feedback=feedback,
            )

        response: str = await llm(convo, temperature=0, parser=OptionalCodeBlockParser())
        # FIXME: provide a counter here so that we don't have an endless loop here
        return AgentResponse.code_review(self, file_name, task["instructions"], file_content, response, attempt)

    async def describe_files(self) -> AgentResponse:
        llm = self.get_llm(DESCRIBE_FILES_AGENT_NAME)
        to_describe = {
            file.path: file.content.content for file in self.current_state.files if not file.meta.get("description")
        }

        for file in self.next_state.files:
            content = to_describe.get(file.path)
            if content is None:
                continue

            if content == "":
                file.meta = {
                    **file.meta,
                    "description": "Empty file",
                    "references": [],
                }
                continue

            log.debug(f"Describing file {file.path}")
            await self.send_message(f"Describing file {file.path} ...")

            convo = (
                AgentConvo(self)
                .template(
                    "describe_file",
                    path=file.path,
                    content=content,
                )
                .require_schema(FileDescription)
            )
            llm_response: FileDescription = await llm(convo, parser=JSONParser(spec=FileDescription))

            file.meta = {
                **file.meta,
                "description": llm_response.summary,
                "references": llm_response.references,
            }
        return AgentResponse.done(self)
````

shakib5326 commented 3 months ago

````python
from os.path import basename

from pydantic import BaseModel, Field

from core.agents.base import BaseAgent
from core.agents.convo import AgentConvo
from core.agents.response import AgentResponse, ResponseType
from core.config import DESCRIBE_FILES_AGENT_NAME
from core.llm.parser import JSONParser, OptionalCodeBlockParser
from core.log import get_logger

log = get_logger(__name__)


class FileDescription(BaseModel):
    summary: str = Field(
        description="Detailed description summarized what the file is about, and what the major classes, functions, elements or other functionality is implemented."
    )
    references: list[str] = Field(
        description="List of references the file imports or includes (only files local to the project), where each element specifies the project-relative path of the referenced file, including the file extension."
    )


class CodeMonkey(BaseAgent):
    agent_type = "code-monkey"
    display_name = "Code Monkey"

    async def run(self) -> AgentResponse:
        if self.prev_response and self.prev_response.type == ResponseType.DESCRIBE_FILES:
            return await self.describe_files()
        else:
            return await self.implement_changes()

    async def implement_changes(self) -> AgentResponse:
        file_name = self.step["save_file"]["path"]

        current_file = await self.state_manager.get_file_by_path(file_name)
        file_content = current_file.content.content if current_file else ""

        task = self.current_state.current_task

        if self.prev_response and self.prev_response.type == ResponseType.CODE_REVIEW_FEEDBACK:
            attempt = self.prev_response.data["attempt"] + 1
            feedback = self.prev_response.data["feedback"]
            log.debug(f"Fixing file {file_name} after review feedback: {feedback} ({attempt}. attempt)")
            await self.send_message(f"Reworking changes I made to {file_name} ...")
        else:
            log.debug(f"Implementing file {file_name}")
            await self.send_message(f"{'Updating existing' if file_content else 'Creating new'} file {file_name} ...")
            self.next_state.action = (
                f'Update file "{basename(file_name)}"' if file_content else f'Create file "{basename(file_name)}"'
            )
            attempt = 1
            feedback = None

        iterations = self.current_state.iterations
        user_feedback = None
        user_feedback_qa = None
        llm = self.get_llm()
        if iterations:
            last_iteration = iterations[-1]
            instructions = last_iteration.get("description", "")
            user_feedback = last_iteration.get("user_feedback", None)
            user_feedback_qa = last_iteration.get("user_feedback_qa", None)
        else:
            instructions = self.current_state.current_task["instructions"]

        convo = AgentConvo(self).template(
            "implement_changes",
            file_name=file_name,
            file_content=file_content,
            instructions=instructions,
            user_feedback=user_feedback,
            user_feedback_qa=user_feedback_qa,
        )
        if feedback:
            convo.assistant(f"```\n{self.prev_response.data['new_content']}\n```\n").template(
                "review_feedback",
                content=self.prev_response.data["approved_content"],
                original_content=file_content,
                rework_feedback=feedback,
            )

        response: str = await llm(convo, temperature=0, parser=OptionalCodeBlockParser())
        # FIXME: provide a counter here so that we don't have an endless loop here
        return AgentResponse.code_review(self, file_name, task["instructions"], file_content, response, attempt)

    async def describe_files(self) -> AgentResponse:
        llm = self.get_llm(DESCRIBE_FILES_AGENT_NAME)
        to_describe = {
            file.path: file.content.content for file in self.current_state.files if not file.meta.get("description")
        }

        for file in self.next_state.files:
            content = to_describe.get(file.path)
            if content is None:
                continue

            if content == "":
                file.meta = {
                    **file.meta,
                    "description": "Empty file",
                    "references": [],
                }
                continue

            log.debug(f"Describing file {file.path}")
            await self.send_message(f"Describing file {file.path} ...")

            convo = (
                AgentConvo(self)
                .template(
                    "describe_file",
                    path=file.path,
                    content=content,
                )
                .require_schema(FileDescription)
            )
            llm_response: FileDescription = await llm(convo, parser=JSONParser(spec=FileDescription))

            file.meta = {
                **file.meta,
                "description": llm_response.summary,
                "references": llm_response.references,
            }
        return AgentResponse.done(self)
````

This version of the `CodeMonkey` class accesses the `description`, `user_feedback`, and `user_feedback_qa` keys safely with the dict `.get()` method, preventing `KeyError` exceptions if any of these keys are missing.
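
To illustrate the difference in isolation, here is a standalone sketch with a made-up iteration entry (not gpt-pilot code):

```python
# A made-up iteration entry that has no 'user_feedback_qa' key.
last_iteration = {"description": "Add input validation", "user_feedback": "no"}

print(last_iteration.get("user_feedback_qa"))  # prints None: the missing key is tolerated
try:
    print(last_iteration["user_feedback_qa"])  # subscript access is not tolerated
except KeyError as exc:
    print(f"KeyError: {exc}")
```

Note that this variant also defaults `instructions` to an empty string via `.get("description", "")`, so `instructions` is always defined even when the last iteration has no `description` key.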

PNdlovu commented 3 months ago

This is strange, because I got the same solution from ChatGPT but it didn't work, and even after copying your code it still does not work. Pythagora won't load the window to upload or create a new application. I wonder how long it will be before the bug is ironed out. I have tried this on both Windows 10 and 11. Please help; maybe I'm doing something wrong.

DaveTacker commented 3 months ago

Is it exactly the same error?

PNdlovu commented 2 months ago

> Is it exactly the same error?

Yes.

On a side note: I have realised that the errors are caused by skipping filling out the .env variables. That means you have to have all the services set up and fill out all the required information before you start coding; then you are good to go. I have not had those errors since.
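
As a rough pre-flight check along those lines, a small standalone script like the one below can verify the settings are present before launching. This is only a sketch; the variable names are placeholders, so substitute whichever keys your own `.env` actually requires:

```python
# Standalone pre-flight check: verify required .env entries are set before starting.
# The key names below are examples only; adjust them to match your own .env file.
import os
from pathlib import Path

REQUIRED_VARS = ["OPENAI_API_KEY", "DATABASE_URL"]  # placeholder keys


def load_dotenv_file(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a .env file, ignoring blanks and comments."""
    values: dict[str, str] = {}
    env_path = Path(path)
    if env_path.exists():
        for line in env_path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values


# Environment variables take precedence over the .env file.
env = {**load_dotenv_file(), **os.environ}
missing = [name for name in REQUIRED_VARS if not env.get(name)]
if missing:
    raise SystemExit(f"Missing required settings: {', '.join(missing)}")
print("All required settings are present.")
```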

bibop commented 2 months ago

> Is it exactly the same error? Yes.
>
> On a side note: I have realised that the errors are caused by skipping filling out the .env variables. That means you have to have all the services set up and fill out all the required information before you start coding; then you are good to go. I have not had those errors since.

I had the same problem. How do you ensure that all the services are set up and all the required information is filled out before you start coding? I have everything set up in .env and I still receive these errors.