astronomer / astronomer-cosmos

Run your dbt Core projects as Apache Airflow DAGs and Task Groups with a few lines of code
https://astronomer.github.io/astronomer-cosmos/
Apache License 2.0

Establish general pattern for uploading artifacts to storage #894

Open dwreeves opened 8 months ago

dwreeves commented 8 months ago

Overview

This issue is a higher-level, more general offshoot of #870. As the need to upload more files to cloud storage grows, the patterns around doing so should be more firmly established and generalized.

Right now there is a DbtDocsOperator that uploads the files generated by dbt docs generate to cloud storage.
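For reference, the existing docs operators are used roughly as below. This is a sketch from memory of the Cosmos docs-generation guide; argument names may differ between versions, and profile_config is assumed to be defined elsewhere.

```python
from cosmos.operators import DbtDocsS3Operator

# Roughly how the existing docs upload works today; argument names may vary
# by Cosmos version, and profile_config is assumed to be defined elsewhere.
generate_dbt_docs_aws = DbtDocsS3Operator(
    task_id="generate_dbt_docs_aws",
    project_dir="path/to/jaffle_shop",
    profile_config=profile_config,
    connection_id="aws_default",
    bucket_name="my-docs-bucket",
)
```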

Additionally, there is a need to be able to run dbt parse or dbt compile inside the Cosmos runtime and upload the artifacts from those runs; see #870 for why.
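For concreteness, here is a minimal sketch of the ad-hoc pattern this issue wants to generalize: run a dbt command, then push selected target/ artifacts to object storage. This is not Cosmos code; the bucket, prefix, and connection id are placeholders, and the only external API used is Airflow's S3Hook.load_file.

```python
import subprocess
from pathlib import Path

from airflow.providers.amazon.aws.hooks.s3 import S3Hook


def compile_and_upload(project_dir: str, bucket: str, prefix: str) -> None:
    # Run `dbt compile` so manifest.json and the compiled SQL are written
    # to the project's target/ directory.
    subprocess.run(["dbt", "compile", "--project-dir", project_dir], check=True)

    # Push the artifacts we care about to S3.
    hook = S3Hook(aws_conn_id="aws_default")
    for artifact in ("manifest.json", "run_results.json"):
        path = Path(project_dir) / "target" / artifact
        if path.exists():
            hook.load_file(
                filename=str(path),
                key=f"{prefix}/{artifact}",
                bucket_name=bucket,
                replace=True,
            )
```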

Considerations

The high-level API for DbtDocsOperator is pretty reasonable, and solving this issue could reuse that same high-level API. That said, there are some minor nitpicks I have about the current API:

^ I think trying to support every one of these API nitpicks may not be useful for most people; perhaps, for example, uploading logs/ should not be supported and dbt run should not be able to upload artifacts. Either way, it would be worth establishing all of this explicitly.

Some additional requirements and considerations for uploading files:

Some additional requirements for the interface:

I don't have answers, just questions. I'm hoping this issue can be a place to discuss what to do.

dosubot[bot] commented 8 months ago

Hey @dwreeves, great to see you diving into another intriguing challenge over here!

Thank you for your detailed request. I agree that establishing a general pattern for uploading artifacts to cloud storage is a crucial enhancement for the Cosmos repository. Here's a proposed approach considering the current state of the Cosmos codebase and your requirements:

  1. Extend the existing DbtDocsOperator: The current DbtDocsOperator is implemented as three specific operators for different cloud storage services: DbtDocsS3Operator, DbtDocsAzureStorageOperator, and DbtDocsGCSOperator. We can extend these operators to handle the uploading of artifacts generated by dbt parse and dbt compile commands. This would involve modifying the execute method of these operators to run the desired dbt command, identify the generated artifacts, and upload them to the respective cloud storage service.

  2. Create a generalized ArtifactUploader class: This class would abstract the process of identifying and uploading artifacts. It could be initialized with a list of artifact file paths (or patterns) to upload and a cloud storage path template. The upload method of this class would then handle the process of uploading the identified artifacts to the templated cloud storage path. This class could also handle the checking of global flags to determine which artifacts to upload.

  3. Integrate ArtifactUploader with DbtDocsOperator: The DbtDocsOperator (and its specific implementations) could be modified to utilize the ArtifactUploader class. After running the dbt command, the operator would create an instance of ArtifactUploader, passing the list of artifacts to upload and the templated cloud storage path. It would then call the upload method of the ArtifactUploader to upload the artifacts.

Here's a conceptual example of how this might look:

```python
from typing import List

from airflow.models import BaseOperator


class ArtifactUploader:
    def __init__(self, artifact_paths: List[str], storage_path_template: str):
        self.artifact_paths = artifact_paths
        self.storage_path_template = storage_path_template

    def upload(self) -> None:
        # Identify artifacts matching the patterns in self.artifact_paths,
        # then upload each one to the rendered storage path. The actual
        # transfer would be delegated to a provider-specific hook.
        ...


class DbtDocsS3Operator(BaseOperator):
    def execute(self, context):
        # Run the dbt command (e.g. dbt docs generate)
        # ...

        # Upload the resulting artifacts
        uploader = ArtifactUploader(
            artifact_paths=["target/*", "logs/*"],
            storage_path_template="dbt_artifacts/{{ run_id }}/",
        )
        uploader.upload()
```

This approach would provide a flexible, extensible way to upload artifacts to cloud storage while also addressing the API nitpicks you mentioned. It would allow artifacts to be uploaded from any dbt command, not just dbt docs generate, and it would provide a clear, consistent way to specify which artifacts to upload and where to upload them.
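A hypothetical usage of that sketch might look like the following. None of these names exist in Cosmos today (DbtCompileOperator and artifact_uploader are illustrative only), and ArtifactUploader refers to the class sketched in the conceptual example above.

```python
# Hypothetical usage, building on the ArtifactUploader sketch above.
uploader = ArtifactUploader(
    artifact_paths=["target/manifest.json", "target/run_results.json"],
    storage_path_template="s3://my-bucket/dbt_artifacts/{{ dag.dag_id }}/{{ run_id }}/",
)

# Any dbt-running operator could accept the uploader and invoke it after the
# dbt command completes, e.g. (hypothetical operator and parameter):
# DbtCompileOperator(task_id="compile", ..., artifact_uploader=uploader)
```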

Please let me know if this approach aligns with your vision, or if there are any other considerations or requirements I should take into account.


#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.

dwreeves commented 8 months ago

First time Dosu did a good job for one of my issues. Thank you Dosu!

dosubot[bot] commented 8 months ago

Thank you, @dwreeves! I'm glad to hear that the proposed approach aligns with your vision. If you have any further questions or need additional assistance as you work through this enhancement, feel free to reach out. I'm here to help!

tatiana commented 5 months ago

Relates to #927