runtimerevolution / labs


LLM Interface: Use LLM to validate + clean generated output #41

Closed sIldefonsoRR closed 3 months ago

sIldefonsoRR commented 4 months ago

The idea is to have an LLM receive the responses from code-generation messages and:

validate the output

  • Check if the code is in-line with the request
  • Check if the code is presentable and runnable

clean the output

  • Remove unnecessary texts from the code

Find and record the most frequent validations and cleanings, to be used in future actions.
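A minimal sketch of the "validate + clean" steps above, assuming the generated code is Python. The helper names (`clean_output`, `looks_runnable`) and the fence-stripping heuristic are illustrative, not part of the issue:

```python
import re

def clean_output(raw: str) -> str:
    """Remove unnecessary text around the code: if the response contains a
    markdown fence, keep only its contents (a naive heuristic)."""
    match = re.search(r"```[A-Za-z]*\n(.*?)```", raw, re.DOTALL)
    if match:
        raw = match.group(1)
    return raw.strip() + "\n"

def looks_runnable(code: str) -> bool:
    """Cheap 'presentable and runnable' check: does the cleaned code
    at least parse as Python?"""
    try:
        compile(code, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

raw = "Sure! Here's the file:\n```python\nprint('hello')\n```\nLet me know!"
code = clean_output(raw)
print(code)                  # print('hello')
print(looks_runnable(code))  # True
```

Recurring cleanups found this way (e.g. fence stripping) could then be recorded and reused, as the issue suggests.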

ghost commented 4 months ago

By refining the prompt we have achieved a YAML output that lets us identify the desired actions:

prompt = f"""
    You're a diligent software engineer AI. You can't see, draw, or interact with a
    browser, but you can read and write files, and you can think.
    You've been given the following task: {nlp_summary}. Your answer will be in YAML format. Please provide a list of actions to perform in order to complete it, considering the current project.
    Each action should contain two fields: action, which is either create or modify, and args, which is a map of key-value pairs specifying the arguments for that action:
    path - the path of the file to create/modify, and content - the content to write to the file.
    Please don't add any text formatting to the answer, making it as clean as possible.
    """

The next step would be to include any required imports and to validate `\n`-type OS characters.
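A sketch of validating the parsed actions against the schema the prompt describes, including a naive normalization of literal `\n` sequences. Parsing the raw YAML answer (e.g. with `yaml.safe_load`) is assumed to happen upstream; the helper name and error messages are hypothetical:

```python
def validate_actions(actions):
    """Check the schema from the prompt: a list of actions, each with
    'action' (create or modify) and 'args' (path + content)."""
    if not isinstance(actions, list):
        raise ValueError("expected a list of actions")
    for action in actions:
        if action.get("action") not in ("create", "modify"):
            raise ValueError(f"unknown action: {action!r}")
        args = action.get("args", {})
        if "path" not in args or "content" not in args:
            raise ValueError(f"missing path/content in: {action!r}")
        # Some models emit a literal backslash-n instead of a real
        # newline; normalize it (a naive heuristic).
        args["content"] = args["content"].replace("\\n", "\n")
    return actions

sample = [{"action": "create",
           "args": {"path": "hello.py",
                    "content": "print('hi')\\nprint('bye')"}}]
print(validate_actions(sample))
```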

ghost commented 4 months ago

Additionally, the file-modification flow needs to be discussed: we want the LLM either to indicate the lines where the changes should go, or to return the whole modified file as a result.
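The line-targeted option could be sketched as below; the whole-file option would simply overwrite the file with the LLM's output. The function name and 1-indexed, inclusive convention are assumptions for illustration:

```python
def apply_line_edit(original: str, start: int, end: int, replacement: str) -> str:
    """Replace lines start..end (1-indexed, inclusive) of the original
    file content with the LLM-supplied replacement text."""
    lines = original.splitlines()
    new_lines = lines[:start - 1] + replacement.splitlines() + lines[end:]
    return "\n".join(new_lines) + "\n"

before = "a\nb\nc\n"
print(apply_line_edit(before, 2, 2, "B"))  # a / B / c
```

The trade-off: line-targeted edits are cheaper in tokens but fragile if the model's line numbers drift, while whole-file output is robust but expensive for large files.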