m-c-frank / justthoughts


improve llmjob #122

Open m-c-frank opened 11 months ago

m-c-frank commented 11 months ago

okay but i think we can make it even more generalizable. assume the llm api has exactly the same capabilities as a developer who has just started learning about the issue with absolutely zero experience. he needs to learn from the ground up, do research, and report every step of the way towards the simplest possible solution the current state of the art provides. he must use free search apis for the internet and for research papers, and provide references. how can we incorporate that into our current solution?

ChatGPT:

To let the LLM conduct research using free internet and research-paper search APIs, and report every step towards finding a solution, you can enhance the existing Python script with calls to the appropriate search APIs. This enables the LLM to simulate the process of a novice developer learning and researching from scratch.

Modifications to the Python Script

1. Include Search APIs: Integrate APIs for internet search (like DuckDuckGo) and for research papers (like arXiv or Semantic Scholar).
2. Research Process: The script first uses these APIs to gather relevant information based on the issue's content.
3. Iterative Solution Development: The LLM then processes this information step by step, emulating a learning developer, and generates a narrative of its learning and decision-making process.
4. Referencing: All resources used are properly referenced in the resulting comment.

Example Python Script (llm_issue_solver.py)

Here's a conceptual outline of how the enhanced script might look:

```python
import os
import json
import sys

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.tools import DuckDuckGoSearchRun, SemanticScholarSearchRun

# Define prompt templates...
# SOLUTION_TEMPLATE expects the issue data and the gathered search results as inputs.


def create_and_run_chain(api_key, template, issue_data, search_results):
    """Run the LLM over the issue together with the gathered research material."""
    llm = OpenAI(api_key=api_key)
    prompt_template = PromptTemplate.from_template(template)
    chain = LLMChain(llm=llm, prompt=prompt_template)
    # Feed both the issue and the search results into the prompt so the model
    # can reason over them and cite its references.
    return chain.invoke({"issue": issue_data, "search_results": search_results})["text"]


def suggest_solution(issue_file, api_key):
    # ... existing code ...

    # Perform initial searches to gather information
    search_results = perform_searches(issue_data)

    # Generate a solution with research references
    solution = create_and_run_chain(api_key, SOLUTION_TEMPLATE, issue_data, search_results)

    # Post the solution as a comment
    # ... existing code ...


def perform_searches(issue_data):
    # Code to perform internet and research paper searches,
    # e.g. via DuckDuckGoSearchRun and SemanticScholarSearchRun
    ...


if __name__ == "__main__":
    # ... existing code ...
    ...
```
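The perform_searches stub above is left open. As a rough sketch of one possible implementation (not part of the existing script: the max_papers parameter, the use of requests against Semantic Scholar's free Graph API instead of a dedicated LangChain tool, and the way results are flattened into one text blob are all assumptions for illustration), it could look like this:

```python
import requests
from langchain.tools import DuckDuckGoSearchRun


def perform_searches(issue_data, max_papers=5):
    """Gather free web and research-paper context for an issue (illustrative sketch)."""
    query = issue_data.get("title", "") + " " + issue_data.get("body", "")

    # Web search via DuckDuckGo (no API key required)
    web_results = DuckDuckGoSearchRun().run(query)

    # Paper search via the free Semantic Scholar Graph API
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": max_papers, "fields": "title,url,abstract"},
        timeout=30,
    )
    resp.raise_for_status()
    papers = resp.json().get("data", [])

    # Flatten everything into one referenced text blob for the prompt
    paper_lines = [f"- {p.get('title')} ({p.get('url')})" for p in papers]
    return "Web results:\n" + web_results + "\n\nPapers:\n" + "\n".join(paper_lines)
```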

Additional Considerations

- API Rate Limits: Ensure that the script handles API rate limits and errors gracefully (a minimal retry sketch follows below).
- Accuracy and Relevance: Since the LLM is simulating a learning process, it's important to ensure the accuracy and relevance of the information it gathers and uses.
- Ethical Use: Use the search APIs ethically and respect their terms of service.

This approach creates a more dynamic, research-oriented solution-finding process, closely simulating a novice developer's learning curve.
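On the rate-limit point, one generic pattern, purely illustrative and not tied to any particular search client, is a small retry wrapper with exponential backoff around each search call:

```python
import time


def with_retries(call, max_attempts=4, base_delay=2.0):
    """Retry a flaky search call with exponential backoff (illustrative sketch)."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:  # e.g. HTTP 429 or transient network errors
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt)
            print(f"search failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)


# usage: web_results = with_retries(lambda: DuckDuckGoSearchRun().run(query))
```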

m-c-frank commented 11 months ago

https://chat.openai.com/share/7d92c64c-f438-4a2d-8986-e0c976633350

m-c-frank commented 11 months ago

maybe make it fetch the relevant branch, use that one to make the changes, and create a PR if it thinks the changes should work
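a rough sketch of what that could look like (the function name, the branch handling, and the use of the GitHub CLI are assumptions, not anything the current llmjob does):

```python
import subprocess


def open_pr_for_branch(branch, title, body, base="main"):
    """Fetch a branch, commit the generated changes, and open a PR via the GitHub CLI (sketch)."""
    def git(*args):
        subprocess.run(["git", *args], check=True)

    git("fetch", "origin", branch)
    git("checkout", branch)
    # ... apply the LLM-generated changes to the working tree here ...
    git("commit", "-am", f"llmjob: {title}")
    git("push", "origin", branch)

    # requires an installed and authenticated GitHub CLI (`gh`)
    subprocess.run(
        ["gh", "pr", "create", "--base", base, "--head", branch,
         "--title", title, "--body", body],
        check=True,
    )
```

the job would only call this once the model has decided the generated patch should actually work.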