**Open** · dwreeves opened this issue 8 months ago
Hey @dwreeves, great to see you diving into another intriguing challenge over here!
Thank you for your detailed request. I agree that establishing a general pattern for uploading artifacts to cloud storage is a crucial enhancement for the Cosmos repository. Here's a proposed approach considering the current state of the Cosmos codebase and your requirements:
1. **Extend the existing `DbtDocsOperator`**: The current `DbtDocsOperator` is implemented as three specific operators for different cloud storage services: `DbtDocsS3Operator`, `DbtDocsAzureStorageOperator`, and `DbtDocsGCSOperator`. We can extend these operators to handle the uploading of artifacts generated by `dbt parse` and `dbt compile` commands. This would involve modifying the `execute` method of these operators to run the desired dbt command, identify the generated artifacts, and upload them to the respective cloud storage service.
2. **Create a generalized `ArtifactUploader` class**: This class would abstract the process of identifying and uploading artifacts. It could be initialized with a list of artifact file paths (or patterns) to upload and a cloud storage path template. The `upload` method of this class would then handle uploading the identified artifacts to the templated cloud storage path. This class could also handle checking global flags to determine which artifacts to upload.
3. **Integrate `ArtifactUploader` with `DbtDocsOperator`**: The `DbtDocsOperator` (and its specific implementations) could be modified to use the `ArtifactUploader` class. After running the dbt command, the operator would create an instance of `ArtifactUploader`, passing the list of artifacts to upload and the templated cloud storage path, and then call its `upload` method to upload the artifacts.
Here's a conceptual example of how this might look:
```python
from typing import List

from airflow.models import BaseOperator


class ArtifactUploader:
    def __init__(self, artifact_paths: List[str], storage_path_template: str):
        self.artifact_paths = artifact_paths
        self.storage_path_template = storage_path_template

    def upload(self):
        # Identify artifacts matching self.artifact_paths (e.g. via glob),
        # then upload each one to the templated storage path.
        ...


class DbtDocsS3Operator(BaseOperator):
    def execute(self, context):
        # Run the dbt command
        # ...
        # Upload the artifacts it produced
        uploader = ArtifactUploader(
            artifact_paths=["target/*", "logs/*"],
            storage_path_template="dbt_artifacts/{{ run_id }}/",
        )
        uploader.upload()
```
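As a side note on the templating: one way (not the only one) to render the `{{ run_id }}` portion would be to declare the path as an Airflow template field, so it is rendered against the task context before `execute` runs. A rough sketch, reusing the `ArtifactUploader` above; the operator name here is hypothetical:

```python
from airflow.models import BaseOperator


class DbtArtifactUploadOperator(BaseOperator):
    # Hypothetical operator name, for illustration only.
    # Airflow renders Jinja in attributes listed in `template_fields`
    # before execute() is called, so "{{ run_id }}" becomes the real run id.
    template_fields = ("storage_path_template",)

    def __init__(self, *, storage_path_template: str = "dbt_artifacts/{{ run_id }}/", **kwargs):
        super().__init__(**kwargs)
        self.storage_path_template = storage_path_template

    def execute(self, context):
        # By this point self.storage_path_template is already rendered,
        # e.g. "dbt_artifacts/manual__2024-01-01T00:00:00+00:00/".
        uploader = ArtifactUploader(
            artifact_paths=["target/*", "logs/*"],
            storage_path_template=self.storage_path_template,
        )
        uploader.upload()
```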
This approach would provide a flexible and extensible way to handle uploading artifacts to cloud storage, while also addressing the API nitpicks you mentioned. It would allow artifacts to be uploaded from any dbt command, not just `dbt docs generate`, and it would provide a clear and consistent way to specify which artifacts to upload and where to upload them.
Please let me know if this approach aligns with your vision, or if there are any other considerations or requirements I should take into account.
First time Dosu did a good job for one of my issues. Thank you Dosu!
Thank you, @dwreeves! I'm glad to hear that the proposed approach aligns with your vision. If you have any further questions or need additional assistance as you work through this enhancement, feel free to reach out. I'm here to help!
Relates to #927
## Overview
This issue should be seen as a higher-level, more general offshoot of #870. Basically, as the need to upload more files to cloud storage increases, the patterns around this should be more firmly established and generalized.
Right now there is a `DbtDocsOperator` that loads files created by dbt into cloud storage.
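For reference, the current high-level API looks roughly like this (adapted from my recollection of the Cosmos docs; exact parameter names may differ between Cosmos versions):

```python
from cosmos import ProfileConfig
from cosmos.operators import DbtDocsS3Operator

profile_config = ProfileConfig(
    profile_name="default",
    target_name="dev",
    profiles_yml_filepath="/path/to/profiles.yml",
)

generate_dbt_docs_aws = DbtDocsS3Operator(
    task_id="generate_dbt_docs_aws",
    project_dir="/path/to/jaffle_shop",
    profile_config=profile_config,
    # docs/upload-specific arguments (names as I recall them from the docs)
    connection_id="aws_default",
    bucket_name="my-docs-bucket",
)
```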
Additionally, there is a need to be able to run `dbt parse` or `dbt compile` inside the Cosmos runtime and upload artifacts from these runs; see #870 for why.

## Considerations
The high-level API for `DbtDocsOperator` is pretty reasonable, and solving this issue can utilize the same high-level API. That said, there are some minor nitpicks I'd have about the current API:
- Other commands, such as `dbt run`, create artifacts too.
- `required_files` may be a little awkward. "Required files" seems in contrast to something like an optional file. Should there be optional files? How should Cosmos treat the process of identifying files to be uploaded more generally?
- Should the `target/` directory be assumed? Notably, what about uploading from `logs/`? This is irrelevant for most things, but it does matter for enabling commands like `dbt run` to upload debug logs to cloud storage.

^ I think trying to support all of these API nitpicks may not be useful for most people, and perhaps e.g. `logs/` upload should not be supported and `dbt run` should not be able to upload artifacts, but it'd be worth explicitly establishing all of that.

Some additional requirements and considerations for uploading files:
- `dbt parse` generates `perf_info.json` as well as, depending on flags, `graph.gpickle`, `graph_summary.json`, or `partial_parse.msgpack`.
- If `--no-write-json` is passed, `graph.gpickle` is not created.
- If `--no-partial-parse` is passed, `partial_parse.msgpack` is not created.
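To make the flag considerations above concrete, here is a rough sketch (the helper name and structure are made up for illustration) of how the set of expected files could be derived from the global flags in use:

```python
from typing import List

# The files mentioned above that `dbt parse` may leave in target/.
PARSE_ARTIFACTS = ["perf_info.json", "graph.gpickle", "graph_summary.json", "partial_parse.msgpack"]


def expected_artifacts(dbt_flags: List[str]) -> List[str]:
    """Hypothetical helper: drop files that the given global flags suppress."""
    expected = list(PARSE_ARTIFACTS)
    if "--no-write-json" in dbt_flags:
        expected.remove("graph.gpickle")  # not written when --no-write-json is passed
    if "--no-partial-parse" in dbt_flags:
        expected.remove("partial_parse.msgpack")  # not written when --no-partial-parse is passed
    return expected


# e.g. expected_artifacts(["--no-write-json"]) excludes graph.gpickle
```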
Some additional requirements for the interface:

- It should be possible to use a templated path such as `dbt_artifacts/{{ run_id }}/` as a location for storing files.
- How should users specify which files to upload? (Should they write `target/graph.gpickle` or not?) You can imagine there being an "upload configuration" class that compresses all of this information down to a portable object (sketched below). Or maybe that is too complex!

I don't have answers, just questions. I'm hoping this issue can be a place to discuss what to do.
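For concreteness only, the "upload configuration" idea mentioned above could be as small as a dataclass like this (all names are made up; a sketch, not a proposal for the final API):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class UploadConfig:
    """Hypothetical portable object describing what to upload and where."""

    required_files: List[str] = field(default_factory=list)  # missing -> fail the task
    optional_files: List[str] = field(default_factory=list)  # missing -> skip silently
    source_dir: str = "target"  # could also be "logs", etc.
    destination_template: str = "dbt_artifacts/{{ run_id }}/"


# e.g. a configuration for uploading `dbt parse` outputs:
parse_upload = UploadConfig(
    required_files=["perf_info.json"],
    optional_files=["graph.gpickle", "graph_summary.json", "partial_parse.msgpack"],
)
```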