
Daily GitHub Summary Report #19

Open · creightontaylor opened 2 months ago

creightontaylor commented 2 months ago

Description:

Create a simple automated task that demonstrates Sif Task Force's ability to handle repetitive development tasks. The task involves generating a daily summary report from a GitHub repository, including the number of commits, open issues, and closed issues. The report should be formatted in markdown and saved in the repository.

Background/Context:

This demo aims to showcase the basic automation capabilities of Sif Task Force, allowing developers to see how the tool can offload routine tasks. The focus should be on simplicity and clear demonstration of the automation process.

Task Details:

- **Task Name:** Generate Daily GitHub Summary Report
- **Frequency:** Daily
- **Repository:** https://github.com/creightontaylor/simple_website
- **Content of the Report:**
  - Date of the report
  - Number of commits made on that day
  - List of commits with commit messages and author names
  - Number of open issues
  - Number of closed issues
- **Format:** Markdown
- **Destination:** Save the report as `daily_summary.md` in the root of the repository.
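For illustration, a generated `daily_summary.md` matching this spec might look like the following (the date, counts, and commit entries are hypothetical):

```markdown
# Daily Summary Report - 2024-05-01

- Commits today: 2
- Open issues: 3
- Closed issues: 5

## Commits

- Fix navbar alignment (Creighton Taylor)
- Add contact page (Creighton Taylor)
```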

Steps to Implement:

1. **Setup Task:** Configure the task to run daily at a specified time. Ensure the task can authenticate and access the provided GitHub repository.
2. **Data Collection:** Fetch the number of commits made on the day. Retrieve commit messages and author names. Count the number of open issues. Count the number of closed issues.
3. **Report Generation:** Format the collected data into a markdown report. Include the date, commit count, commit details, and issue counts.
4. **Saving the Report:** Save the generated markdown report as `daily_summary.md` in the root directory of the repository.

Acceptance Criteria:

- **Report Accuracy:** The report should accurately reflect the number of commits, open issues, and closed issues for the day.
- **Report Formatting:** The report should be well-formatted in markdown.
- **Automation:** The task should run automatically at the specified time each day without manual intervention.
- **Saving the Report:** The report should be correctly saved in the specified location within the repository.

User Stories:

- As a developer, I want to see how Sif Task Force can automate daily reporting tasks so that I can save time on routine activities.
- As a project manager, I want to receive a daily summary report of the repository to stay updated on the project's progress without manually checking GitHub.

creightontaylor commented 2 months ago

1. πŸ“ Indexing

I'm indexing the files within your repository.

Successfully indexed!

2. πŸ”Ž Searching

I'm searching for relevant snippets in your repository.

It seems there was a misunderstanding in the task request. My capabilities as an AI developed by OpenAI are focused on processing and generating text-based information, including code analysis and providing insights or solutions based on the given data. However, I don't have the ability to directly interact with external systems, such as GitHub repositories, to fetch data, run scripts, or create files like daily_summary.md in real-time or through automated daily tasks.

To achieve the goal of creating an automated daily summary report for a GitHub repository, you would typically use a combination of GitHub's API for fetching commit and issue data, a scheduled task runner (like GitHub Actions, Jenkins, or a simple cron job), and a script written in a language like Python or JavaScript to process the data and generate the report.

Here's a high-level overview of how you could set up such a process (a code sketch follows these steps):

  1. GitHub API: Use the GitHub API to fetch the day's commits and issue data. The GitHub API provides endpoints for retrieving repository commits (GET /repos/:owner/:repo/commits) and issues (GET /repos/:owner/:repo/issues).

  2. Scheduled Task Runner: Set up a scheduled task (using GitHub Actions, for example) to run at a specified time each day. This task will execute your script to generate the daily summary.

  3. Scripting: Write a script that uses the GitHub API to fetch the necessary data, formats it into a markdown report, and then commits this report back to the repository as daily_summary.md. This script can be written in any language that can make HTTP requests and process JSON data, such as Python or JavaScript.

  4. Permissions: Ensure the script or GitHub Action has the necessary permissions to access repository data and commit files. This might involve generating a personal access token or using a GitHub App with the appropriate permissions.

  5. Markdown Formatting: The script should format the fetched data into markdown. This involves creating a markdown string that includes the date, commit messages, author names, the current number of open issues, and the number of issues closed that day.

  6. Committing the Report: Finally, the script should commit the generated daily_summary.md file back to the repository. This can be done using git commands within the script or via the GitHub API.
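To make the data-fetching and formatting pieces (steps 1 and 5) concrete, here is a minimal, untested Python sketch. The owner/repo values, token handling, and report layout are assumptions for illustration; pagination, error handling, and the commit-back step are omitted for brevity:

```python
import datetime
import os

import requests

# Assumed values for illustration; adjust for your repository.
OWNER = "creightontaylor"
REPO = "simple_website"
TOKEN = os.environ.get("GITHUB_TOKEN")  # personal access token or Actions token

API = "https://api.github.com"
HEADERS = {"Authorization": f"token {TOKEN}"} if TOKEN else {}

today = datetime.date.today()

# GET /repos/:owner/:repo/commits accepts ISO 8601 `since`/`until` filters.
commits = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/commits",
    headers=HEADERS,
    params={"since": f"{today}T00:00:00Z", "until": f"{today}T23:59:59Z"},
).json()

def issue_count(state: str) -> int:
    # The search API's `total_count` excludes pull requests when the
    # query is restricted with `is:issue`.
    resp = requests.get(
        f"{API}/search/issues",
        headers=HEADERS,
        params={"q": f"repo:{OWNER}/{REPO} is:issue is:{state}"},
    )
    return resp.json().get("total_count", 0)

# Assemble the markdown report line by line.
lines = [
    f"# Daily Summary Report - {today}",
    "",
    f"- Commits today: {len(commits)}",
    f"- Open issues: {issue_count('open')}",
    f"- Closed issues: {issue_count('closed')}",
    "",
    "## Commits",
]
for c in commits:
    lines.append(
        f"- {c['commit']['message'].splitlines()[0]} ({c['commit']['author']['name']})"
    )

with open("daily_summary.md", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```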

While I can help draft the script or provide code snippets for specific parts of this process, I cannot execute or automate tasks directly within external systems like GitHub. If you need assistance with writing the script or setting up the automation process, please let me know how I can assist further!


3. 🌐 Planning

I'm creating a plan for the discussion and coding steps.

πŸ’‘ Steps

Step 1 - Create GitHub Actions Workflow File

5. 🎲 Assignment

Here is the finalized workflow (an illustrative YAML sketch follows the steps):

Step 1 - Create GitHub Actions Workflow File
- **Action:** `create`
- **File:** `.github/workflows/daily_summary_report.yml`
- **Lines:** `[1, 20]`
- **Dependencies:** `[]`
- **Instructions:** `Create a new GitHub Actions workflow file named 'daily_summary_report.yml' in the '.github/workflows' directory. This file will define the automation for generating the daily summary report. Start by setting up the name of the workflow and scheduling it to run at a specific time daily using the cron syntax in the 'on' section. For example, to run the workflow every day at 00:00 UTC, use 'schedule: - cron: '0 0 * * *''. Also, specify the job name, runs-on, and steps required to checkout the repository, set up Python, and execute the script that will generate the report.`
- **Agent:** `Software engineer`

Step 2 - Create Python Script for Generating Report
- **Action:** `create`
- **File:** `scripts/generate_daily_summary.py`
- **Lines:** `[1, 50]`
- **Dependencies:** `['Step 1']`
- **Instructions:** `Create a Python script named 'generate_daily_summary.py' in a new 'scripts' directory. This script will use the GitHub API to fetch the number of commits, open issues, and closed issues for the day. It should also format this information into a markdown report. Include necessary imports such as 'requests' for API calls and 'datetime' to work with dates. The script should check if a report for the current day already exists to avoid unnecessary API calls and duplication. If not, it should proceed to generate a new report, format the data into markdown, and save it as 'daily_summary.md' in the repository's root.`
- **Agent:** `Software engineer`

Step 3 - Modify GitHub Actions Workflow to Run Python Script
- **Action:** `modify`
- **File:** `.github/workflows/daily_summary_report.yml`
- **Lines:** `[15, 19]`
- **Dependencies:** `['Step 1', 'Step 2']`
- **Instructions:** `In the existing GitHub Actions workflow file 'daily_summary_report.yml', add steps to install Python dependencies by running 'pip install requests' and then execute the 'generate_daily_summary.py' script. Ensure these steps are added after setting up Python and checking out the repository. Use the 'run' command for both installing dependencies and executing the script. This ensures that the Python script is executed as part of the workflow, allowing the daily summary report to be generated automatically.`
- **Agent:** `Software engineer`

Step 4 - Add Step to Commit and Push Report File
- **Action:** `modify`
- **File:** `.github/workflows/daily_summary_report.yml`
- **Lines:** `[20, 25]`
- **Dependencies:** `['Step 3']`
- **Instructions:** `Modify the GitHub Actions workflow file '.github/workflows/daily_summary_report.yml' to add a final step that commits the newly created 'daily_summary.md' file and pushes it to the repository. This step should use a pre-configured action, such as 'actions/checkout@v2' for checking out the code and 'stefanzweifel/git-auto-commit-action@v4' for committing and pushing changes. Configure the commit message to reflect the nature of the update, for example, 'Update daily summary report for '. This ensures that the generated report is committed back to the repository automatically each day.`
- **Agent:** `Software engineer`
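Assembled from the four steps above, the workflow file might look roughly like this. The action versions, the `permissions` block, and the commit message are assumptions for illustration, not the finalized output:

```yaml
# Illustrative sketch of .github/workflows/daily_summary_report.yml
name: Daily Summary Report

on:
  schedule:
    - cron: '0 0 * * *'  # every day at 00:00 UTC (Step 1)

permissions:
  contents: write  # allow the workflow to push the report back

jobs:
  generate-report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - run: pip install requests                      # Step 3
      - run: python scripts/generate_daily_summary.py  # Step 3
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - uses: stefanzweifel/git-auto-commit-action@v4  # Step 4
        with:
          commit_message: Update daily summary report
```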
creightontaylor commented 2 months ago

Could we consider implementing a caching mechanism or checking for existing reports before generating a new one to handle the concern regarding API rate limiting and workflow efficiency? This could potentially reduce unnecessary API calls, especially on days without significant changes.
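For illustration, a guard along these lines at the top of the script would skip the API calls when today's report is already present (the file name and date-in-header convention are assumptions carried over from the sketches above):

```python
import datetime
import pathlib

report = pathlib.Path("daily_summary.md")
today = str(datetime.date.today())

# Skip regeneration if today's report already exists and its first
# line carries today's date (hypothetical convention).
if report.exists():
    first_line = report.read_text(encoding="utf-8").splitlines()[:1]
    if first_line and today in first_line[0]:
        print("Report for today already exists; skipping API calls.")
        raise SystemExit(0)
```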

creightontaylor commented 2 months ago

Considering the workflow and the concerns listed, I suggest incorporating a more detailed error handling and logging mechanism within both the GitHub Actions workflow and the Python script.

For the GitHub Actions workflow, we can add a step that captures any failure in previous steps and sends a notification (e.g., email, Slack) to the relevant stakeholders with details of the failure. This ensures quick response to failures. For the Python script, besides implementing try-except blocks, it would be beneficial to include detailed logging for each step of the data fetching and processing. This can help in troubleshooting and understanding the script's behavior over time.

Additionally, considering the concern about API rate limiting, implementing a caching mechanism or checking for existing reports before fetching new data could significantly reduce the number of API calls, thus mitigating the risk of hitting the rate limit.

Lastly, it's crucial to ensure that the documentation is comprehensive, covering setup, execution, error handling, and troubleshooting guidelines to facilitate easy maintenance and updates by other developers.
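As a sketch of that try-except-plus-logging pattern, each API call in the script could go through a logged wrapper like this (the log file name and the decision to re-raise are illustrative choices, not the final implementation):

```python
import logging

import requests

logging.basicConfig(filename="daily_summary.log", level=logging.INFO)
log = logging.getLogger("daily_summary")

def fetch_json(url: str, **kwargs):
    """Fetch JSON with logging and error propagation (illustrative wrapper)."""
    try:
        resp = requests.get(url, timeout=30, **kwargs)
        resp.raise_for_status()  # surface HTTP errors (e.g., 403 on rate limiting)
        log.info("GET %s -> %s", url, resp.status_code)
        return resp.json()
    except requests.RequestException:
        log.exception("GET %s failed", url)
        raise  # let the workflow step fail so a notification step can fire
```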

creightontaylor commented 2 months ago

Considering the workflow and the listed concerns, I recommend integrating a more comprehensive error handling and logging mechanism across the GitHub Actions workflow and the Python script.

Specifically, for the GitHub Actions workflow, it would be beneficial to include a step that captures any failures in previous steps and sends a detailed notification to the relevant stakeholders. This could be implemented using GitHub's built-in actions or third-party services for notifications like emails or Slack messages. For the Python script, implementing detailed logging for each step of the data fetching and processing would not only aid in troubleshooting but also in understanding the script's behavior over time.

Additionally, to address the concern about API rate limiting, introducing a caching mechanism or a check for existing reports before fetching new data could significantly reduce the number of API calls, thus mitigating the risk of hitting the rate limit. This could be achieved by storing a timestamp of the last successful run and comparing it before making new API requests.

Lastly, ensuring that the documentation is comprehensive, covering setup, execution, error handling, and troubleshooting guidelines, will facilitate easy maintenance and updates by other developers. This approach not only addresses several concerns listed but also enhances the overall robustness and maintainability of the automation process.

creightontaylor commented 2 months ago

Considering the concerns about API rate limiting and workflow efficiency, I suggest implementing a more robust caching mechanism within the Python script. This could involve checking for existing reports and only fetching new data if there are updates since the last report.

Additionally, to address the error handling and logging concerns, it might be beneficial to enhance the Python script and GitHub Actions workflow with more detailed logging and error handling mechanisms. For the GitHub Actions workflow, we could add a step that captures any failure and sends a notification to the team. For the Python script, implementing comprehensive try-except blocks and logging each step could greatly improve troubleshooting and maintenance.

This approach not only addresses the concerns raised but also optimizes the workflow, making it more efficient and reliable.

creightontaylor commented 2 months ago

@creightontaylor The draft PR https://github.com/creightontaylor/simple_website/pull/21 from the issue https://github.com/creightontaylor/simple_website/issues/19 was created by SIF TaskForce. Please review and add comments. When it is ready for final review, please click on the 'Ready for Review' button and SIF TaskForce will take care of the rest.