Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co
MIT License

When running autogpt in pytest, we shouldn't wait for the logs to be written #4199

Closed waynehamadi closed 12 months ago

waynehamadi commented 1 year ago

Duplicates

Summary 💡

If you run python -m pytest -s tests/integration/goal_oriented/test_write_file.py you will see the test takes 6 seconds, although everything is recorded with cassettes. So why is it taking 6 seconds? Because we wait for the logs to be written.

Please find a trick to solve that; the fewer lines of code needed, the better. And please make a PR once you find it.

Examples 🌈

No response

Motivation 🔦

No response

waynehamadi commented 1 year ago

We need help with this one, high value item.

k-boikov commented 1 year ago

Related https://github.com/Significant-Gravitas/Auto-GPT/issues/4437

Torantulino commented 1 year ago

One possible solution to your issue could be to make use of Python's asynchronous features. In essence, we could leverage asyncio to write the logs asynchronously, so the test doesn't have to wait for the log writing operation to complete.

Here's how you could do it:

log_cycle.py

import json
import os
import asyncio
from typing import Any, Dict, Union

from autogpt.logs import logger

# ... rest of your existing code ...

class LogCycleHandler:
    # ... rest of your existing code ...

    async def log_cycle(
        self,
        ai_name: str,
        created_at: str,
        cycle_count: int,
        data: Union[Dict[str, Any], Any],
        file_name: str,
    ) -> None:
        """
        Log cycle data to a JSON file.

        Args:
            ai_name (str): The name of the AI whose cycle is being logged.
            created_at (str): Timestamp of the run, used in the folder path.
            cycle_count (int): Index of the current cycle.
            data (Any): The data to be logged.
            file_name (str): The name of the file to save the logged data.
        """
        nested_folder_path = self.create_nested_directory(
            ai_name, created_at, cycle_count
        )

        json_data = json.dumps(data, ensure_ascii=False, indent=4)
        log_file_path = os.path.join(
            nested_folder_path, f"{self.log_count_within_cycle}_{file_name}"
        )

        # Schedule logger.log_json without awaiting it. Note: this requires
        # a running event loop, and logger.log_json must be a coroutine.
        asyncio.create_task(logger.log_json(json_data, log_file_path))
        self.log_count_within_cycle += 1

Note that this is a basic solution, and you might need to handle exceptions and ensure the asyncio task is completed if your program exits before the task is finished.

Please keep in mind that modifying the logger.log_json function might be needed to support asynchronous operations. Also, this approach should be used carefully as it could potentially cause issues with file I/O if you're writing to the same file concurrently. In your case, as different log files are created for each log, this should not be an issue.
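As a rough, self-contained sketch of that caveat (the write_log helper below is hypothetical, not AutoGPT's actual logger API): blocking file I/O can be pushed off the event loop with asyncio.to_thread, and the pending tasks gathered before shutdown so no log write is lost to an early exit.

```python
import asyncio
import os
import tempfile


def write_log(path: str, data: str) -> None:
    # Hypothetical stand-in for logger.log_json: plain blocking file I/O.
    with open(path, "w", encoding="utf-8") as f:
        f.write(data)


async def main(log_dir: str) -> None:
    # Schedule each write in a worker thread so the caller never blocks.
    pending = [
        asyncio.create_task(
            asyncio.to_thread(
                write_log, os.path.join(log_dir, f"{i}_cycle.json"), "{}"
            )
        )
        for i in range(3)
    ]
    # Drain all pending log writes before exiting, so none are dropped.
    await asyncio.gather(*pending)


log_dir = tempfile.mkdtemp()
asyncio.run(main(log_dir))
```

The gather at the end is the key point: fire-and-forget tasks created with asyncio.create_task are silently abandoned if the loop shuts down first.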

Once you have made the changes, you can create a new branch and commit your changes. Then you can create a Pull Request for the changes to be reviewed and possibly merged into the main codebase.

Let me know if this helps or if you have further questions!


This response was generated by Git-Aid and may not be accurate or appropriate. The author of this repository and the creator of the AI model assume no responsibility or liability for any consequences arising from the use of the information provided in this response. 🤖

collijk commented 1 year ago

Do we need the logs in the test? If not, you could monkeypatch logging to do nothing with the output.
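A minimal sketch of that idea, using a stand-in handler class (the real target would be AutoGPT's log-cycle handler; the exact import path is omitted here): pytest's built-in monkeypatch fixture can replace the slow log method with a no-op for every test.

```python
import pytest


class LogCycleHandler:
    # Stand-in for the real handler; imagine slow, blocking file I/O here.
    def log_cycle(self, *args, **kwargs) -> None:
        raise RuntimeError("tests should never reach the real log writer")


# In conftest.py: autouse=True applies the patch to every test automatically.
@pytest.fixture(autouse=True)
def silence_log_cycle(monkeypatch):
    monkeypatch.setattr(
        LogCycleHandler, "log_cycle", lambda self, *a, **kw: None
    )


def test_runs_without_waiting_for_logs():
    # With the fixture active, this returns immediately instead of
    # waiting on file writes.
    assert LogCycleHandler().log_cycle("{}") is None
```

monkeypatch undoes the patch after each test, so the real logging behavior is untouched outside the test run.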

waynehamadi commented 1 year ago

https://github.com/Significant-Gravitas/Auto-GPT/pull/4486

github-actions[bot] commented 1 year ago

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions[bot] commented 12 months ago

This issue was closed automatically because it has been stale for 10 days with no activity.