neuralmesh / apimesh

Serves as the template to enable LLMs in any GitHub project
GNU General Public License v3.0

write a readme #10

Closed m-c-frank closed 9 months ago

m-c-frank commented 9 months ago

this is the workflow

name: Apimesh Processing

on:
  issues:
    types:
      - labeled

jobs:
  process-with-ai:
    runs-on: ubuntu-latest
    if: github.event.label.name == 'apimesh'
    permissions:
      issues: write
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'

      - name: Install Python Dependencies
        run: |
          pip install fastapi pydantic langchain openai

      - name: Fetch Issue Data
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh issue view ${{ github.event.issue.number }} --json title,body,comments -t '{{.title}}|{{.body}}{{range .comments}}|{{.body}}{{end}}' > issue_data.txt

      - name: Run Apimesh Python Script
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: python ./apimesh.py "$(cat issue_data.txt)"

      - name: Post AI Response as Comment
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh issue comment ${{ github.event.issue.number }} --body "$(cat ai_response.txt)"
          gh issue edit ${{ github.event.issue.number }} --remove-label "apimesh"
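
The Fetch Issue Data step serializes the issue into one pipe-delimited string (`title|body|comment1|...`). A minimal sketch of how a script could split that string back apart (the `parse_issue_data` helper is illustrative, not part of the repo):

```python
def parse_issue_data(raw: str) -> dict:
    """Split the pipe-delimited string produced by the gh template:
    '{{.title}}|{{.body}}{{range .comments}}|{{.body}}{{end}}'."""
    parts = raw.split("|")
    return {
        "title": parts[0],
        "body": parts[1] if len(parts) > 1 else "",
        "comments": parts[2:],  # zero or more trailing comment bodies
    }

example = "write a readme|this is the workflow|used the comment as the basis"
parsed = parse_issue_data(example)
print(parsed["title"])           # -> write a readme
print(len(parsed["comments"]))   # -> 1
```

Note that this format is lossy: a title, body, or comment that itself contains `|` will be split incorrectly.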

this is the python script:

import os
import sys
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Read issue data from the command line argument
issue_data = sys.argv[1]

# Get the OpenAI API key from environment variables
api_key = os.environ['OPENAI_API_KEY']

# Initialize LangChain with OpenAI
llm = OpenAI(openai_api_key=api_key, temperature=0.7)  # Adjust temperature as needed
prompt_template = "Given the following GitHub issue details: {issue_details}, what would be a good solution? Be concise, neutral and professional. Aim for the highest possible information density."

# Setup LLMChain
llm_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(prompt_template))

# Process issue data
response = llm_chain.predict(issue_details=issue_data)

# Save the response to a file
with open('ai_response.txt', 'w') as file:
    file.write(response)
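
The prompt assembly can be checked locally without an API key. This stand-alone sketch mirrors what `PromptTemplate.from_template` does with plain `str.format` (no LangChain import or network call involved):

```python
# Same template string used in apimesh.py
prompt_template = (
    "Given the following GitHub issue details: {issue_details}, "
    "what would be a good solution? Be concise, neutral and professional."
)

def build_prompt(issue_details: str) -> str:
    # PromptTemplate.from_template(...) performs essentially this substitution
    return prompt_template.format(issue_details=issue_details)

print(build_prompt("write a readme|this is the workflow"))
```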
github-actions[bot] commented 9 months ago


This is a GitHub Actions workflow that processes an issue labeled "apimesh" with an AI language model. The workflow checks out the repository, sets up Python, installs the Python dependencies, fetches the issue data, runs the apimesh Python script, and posts the AI response as a comment.

To use this workflow, you will need to add your OpenAI API key as a repository secret named OPENAI_API_KEY. This can be done in the repository's Secrets tab in the GitHub settings.

Once the workflow is set up, you can simply label an issue with "apimesh" and the workflow will be triggered. The AI response will then be posted as a comment on the issue.
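
The trigger guard in the workflow (`if: github.event.label.name == 'apimesh'`) can be sketched as a predicate over the webhook payload; the field names follow the GitHub `issues: labeled` event schema:

```python
def should_process(event: dict) -> bool:
    """Mirror of the workflow's `if:` guard: run only when the label
    that was just applied to the issue is named 'apimesh'."""
    return event.get("label", {}).get("name") == "apimesh"

print(should_process({"label": {"name": "apimesh"}}))  # -> True
print(should_process({"label": {"name": "bug"}}))      # -> False
```

Because the workflow removes the label after commenting, re-applying "apimesh" to the same issue triggers another run.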

Note that the Python script (apimesh.py) has to be included in the repository root for the workflow to run correctly.

For further customization, you can adjust the temperature of the language model in the script (default is 0.7) and modify the prompt template as needed.

m-c-frank commented 9 months ago

used the comment as the basis for the new README.md